
Supporting Information: On the Universal Structure of Human Lexical
Semantics
Hyejin Youn,1,2,3 Logan Sutton,4 Eric Smith,3,5 Cristopher Moore,3 Jon F. Wilkins,3,6 Ian Maddieson,7,8 William Croft,7 and Tanmoy Bhattacharya3,9

1 Institute for New Economic Thinking at the Oxford Martin School, Oxford, OX2 6ED, UK
2 Mathematical Institute, University of Oxford, Oxford, OX2 6GG, UK
3 Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
4 American Studies Research Institute, Indiana University, Bloomington, IN 47405, USA
5 Earth-Life Sciences Institute, Tokyo Institute of Technology, 2-12-1-IE-1 Ookayama, Meguro-ku, Tokyo, 152-8550, Japan
6 Ronin Institute, Montclair, NJ 07043, USA
7 Department of Linguistics, University of New Mexico, Albuquerque, NM 87131, USA
8 Department of Linguistics, University of California, Berkeley, CA 94720, USA
9 MS B285, Grp T-2, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
CONTENTS

I. Methodology for Data Collection and Analysis
   A. Criteria for selection of meanings
   B. Criteria for selection of languages
   C. Semantic analysis of word senses
   D. Bidirectional translation, and linguists’ judgments on aggregation of meanings
   E. Trimming, collapsing, projecting

II. Notation and Methods of Network Representation
   A. Network representations
      1. Multi-layer network representation
      2. Directed-hyper-graph representation
      3. Projection to directed simple graphs and aggregation over target languages
   B. Model for semantic space represented as a topology
      1. Interpretation of the model into network representation
      2. Beyond the available sample data
   C. The aggregated network of meanings
   D. Synonymous polysemy: correlations within and among languages
   E. Node degree and link presence/absence data
   F. Node degree and Swadesh meanings

III. Universal Structure: Conditional dependence
   A. Comparing semantic networks between language groups
      1. Mantel test
      2. Hierarchical clustering test
   B. Statistical significance
   C. Single-language graph size is a significant summary statistic
   D. Conclusion

IV. Model for Degree of Polysemy
   A. Aggregation of language samples
   B. Independent sampling from the aggregate graph
      1. Statistical tests
      2. Product model with intrinsic property of concepts
      3. Product model with saturation
   C. Single instances as to aggregate representation
      1. Power tests and uneven distribution of single-language p-values
      2. Excess fluctuations in degree of polysemy
      3. Correlated link assignments

References

I. METHODOLOGY FOR DATA COLLECTION AND ANALYSIS
The following selection criteria for languages and words, and recording criteria from dictionaries,
were used to provide a uniform treatment across language groups, and to compensate where possible
for systematic variations in documenting conventions. These choices are based on the expert
judgment of authors WC, LS, and IM in typology and comparative historical linguistics.
A. Criteria for selection of meanings
Our translations use only lexical concepts as opposed to grammatical inflections or function
words. For the purpose of universality and stability of meanings across cultures, we chose entries
from the Swadesh 200-word list of basic vocabulary. Among these, we have selected categories that
are likely to have single-word representation for meanings, and for which the referents are material
entities or natural settings rather than social or conceptual abstractions. We have selected 22 words
in domains concerning natural and geographic features, so that the web of polysemy will produce
a connected graph whose structure we can analyze, rather than having an excess of disconnected
singletons. We have omitted body parts—which by the same criteria would provide a similarly
appropriate connected domain—because these have been considered previously [1–4]. The final set
of 22 words is as follows:
• Celestial Phenomena and Related Time Units:
STAR, SUN, MOON, YEAR, DAY/DAYTIME, NIGHT
• Landscape Features:
SKY, CLOUD(S), SEA/OCEAN, LAKE, RIVER, MOUNTAIN
• Natural Substances:
STONE/ROCK, EARTH/SOIL, SAND, ASH(ES), SALT, SMOKE, DUST, FIRE, WATER,
WIND
B. Criteria for selection of languages
The statistical analysis of typological features of languages inevitably requires assumptions
about which observations are independent samples from an underlying generative process. Since
languages of the world have varying degrees of relatedness, language features are subject to Galton’s problem of non-independence of samples, which can only be overcome with a full historical
reconstruction of relations. However, long-range historical relations are not known or not accepted
for most language families of the world [5]. It has become accepted practice to restrict to single
representatives of each genus in statistical typological analyses [6, 7].1
In order to minimize redundant samples within our data, we selected only one language from
each genus-level family [8]. The sample consists of 81 languages chosen from 535 genera in order to
maximize geographical diversity, taking into consideration population size, presence or absence of a
written language, environment and climate, and availability of a good quality bilingual dictionary.
The list of languages in our sample, sorted by geographical region and phylogenetic affiliation,
is given in Table I, and the geographical distribution is shown in Fig. 1. The contributions of
languages to our dataset, including numbers of words and of polysemies, are shown as a function
of language ranked by each language’s number of speakers in Fig. 2.
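The one-language-per-genus restriction can be sketched as follows. The genus assignments and the random tie-breaking rule below are illustrative assumptions only; the actual selection also weighed geography, population size, written tradition, and dictionary quality.

```python
import random

# Hypothetical genus assignments (illustrative, not the actual
# classification used in the study).
genus_of = {
    "Spanish": "Italic",
    "Portuguese": "Italic",   # a second member of the same genus
    "Russian": "Slavic",
    "Finnish": "Finnic",
}

# Group languages by genus.
by_genus = {}
for lang, genus in sorted(genus_of.items()):
    by_genus.setdefault(genus, []).append(lang)

# Keep exactly one representative per genus (random tie-break).
random.seed(0)
sample = {genus: random.choice(langs) for genus, langs in by_genus.items()}
print(len(sample))  # one language per genus
```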
FIG. 1. Geographical distribution of selected languages. The color map represents the number of languages
included in our study for each country. White indicates that no language is selected from a country, and the
darkest orange indicates that 18 languages are selected. For example, the United States has 18 languages
included in our study because of the diversity of Native American languages.
1 As long as the proliferation of members within a language family is not correlated with their typological characteristics, this restriction provides no protection against systematic bias, and in general it must be weighed against the contribution of more languages to resolution or statistical power.
Region    Family             Genus                  Language
Africa    Khoisan            Northern               Ju|'hoan
                             Central                Khoekhoegowab
                             Southern               !Xóõ
          Niger-Kordofanian  NW Mande               Bambara
                             Southern W. Atlantic   Kisi
                             Defoid                 Yorùbá
                             Igboid                 Igbo
                             Cross River            Efik
                             Bantoid                Swahili
          Nilo-Saharan       Saharan                Kanuri
                             Kuliak                 Ik
                             Nilotic                Nandi
                             Bango-Bagirmi-Kresh    Kaba Démé
          Afro-Asiatic       Berber                 Tumzabt
                             West Chadic            Hausa
                             E Cushitic             Rendille
                             Semitic                Iraqi Arabic
Eurasia   Basque             Basque                 Basque
          Indo-European      Armenian               Armenian
                             Indic                  Hindi
                             Albanian               Albanian
                             Italic                 Spanish
                             Slavic                 Russian
          Uralic             Finnic                 Finnish
          Altaic             Turkic                 Turkish
                             Mongolian              Khalkha Mongolian
          Japanese           Japanese               Japanese
          Chukotkan          Kamchatkan             Itelmen (Kamchadal)
          Caucasian          NW Caucasian           Kabardian
                             Nax                    Chechen
          Kartvelian         Kartvelian             Georgian
          Dravidian          Dravidian Proper       Badaga
          Sino-Tibetan       Chinese                Mandarin
                             Karen                  Karen (Bwe)
                             Kuki-Chin-Naga         Mikir
                             Burmese-Lolo           Hani
                             Naxi                   Naxi
Oceania   Hmong-Mien         Hmong-Mien             Hmong Njua
          Austroasiatic      Munda                  Sora
                             Palaung-Khmuic         Minor Mlabri
                             Aslian                 Semai (Sengoi)
          Daic               Kam-Tai                Thai
          Austronesian       Oceanic                Trukese
          Papuan             Middle Sepik           Kwoma
                             E NG Highlands         Yagaria
                             Angan                  Baruya
                             C and SE New Guinea    Koiari
                             West Bougainville      Rotokas
                             East Bougainville      Buin
          Australian         Gunwinyguan            Nunggubuyu
                             Maran                  Mara
                             Pama-Nyungan           E and C Arrernte
Americas  Eskimo-Aleut       Aleut                  Aleut
          Na-Dene            Haida                  Haida
                             Athapaskan             Koyukon
          Algic              Algonquian             Western Abenaki
          Salishan           Interior Salish        Thompson Salish
          Wakashan           Wakashan               Nootka (Nuuchahnulth)
          Siouan             Siouan                 Lakhota
          Caddoan            Caddoan                Pawnee
          Iroquoian          Iroquoian              Onondaga
          Coastal Penutian   Tsimshianic            Coast Tsimshian
                             Klamath                Klamath
                             Wintuan                Wintu
                             Miwok                  Northern Sierra Miwok
          Gulf               Muskogean              Creek
          Mayan              Mayan                  Itzá Maya
          Hokan              Yanan                  Yana
                             Yuman                  Cocopa
          Uto-Aztecan        Numic                  Tümpisa Shoshone
                             Hopi                   Hopi
          Otomanguean        Zapotecan              Quiavini Zapotec
          Paezan             Warao                  Warao
                             Chimúan                Mochica/Chimu
          Quechuan           Quechua                Huallaga Quechua
          Araucanian         Araucanian             Mapudungun (Mapuche)
          Tupí-Guaraní       Tupí-Guaraní           Guaraní
          Macro-Arawakan     Harákmbut              Amarakaeri
                             Maipuran               Piro
          Macro-Carib        Carib                  Carib
                             Peba-Yaguan            Yagua
TABLE I. The languages included in our study. Notes: Oceania includes Southeast Asia; the Papuan
languages do not form a single phylogenetic group in the view of most historical linguists; other families
in the table vary in their degree of acceptance by historical linguists. The classification at the genus level,
which is of greater importance to our analysis, is more generally agreed upon.
FIG. 2. Vocabulary measures of languages in the dataset, ranked in descending order of speaker population
size. Population sizes are taken from Ethnologue. Each language is characterized by the number of meanings
in our polysemy dataset, of unique meanings, of non-unique meanings (defined by excluding all single
occurrences), and of polysemous words (those having multiple meanings), plotted in blue, green, red, and
cyan, respectively. We find a nontrivial correlation between speaker population and the amount of data
contributed per language.
C. Semantic analysis of word senses
All of the bilingual dictionaries translated object language words into English, or in some
cases, Spanish, French, German or Russian (bilingual dictionaries in the other major languages
were used in order to gain maximal phylogenetic and geographic distribution). That is, we use
English and the other major languages as the semantic metalanguage for the word senses of the
object language words used in the analysis. English (or any natural language) is an imperfect
semantic metalanguage, because English itself has many polysemous words and divides the space
of concepts in a partly idiosyncratic way. This is already apparent in Swadesh’s own list: he
treated STONE/ROCK and EARTH/SOIL as synonyms, and had to specify that DAY referred
to DAYTIME as opposed to NIGHT, rather than a 24-hour period. However, the selection of a
concrete semantic domain including many discrete objects such as SUN and MOON allowed us to
avoid the much greater problems of semantic comparison in individuating properties and actions
or social and psychological concepts.
We followed lexicographic practice in individuating word senses across the languages. Lexicographers are aware of polysemies such as DAYTIME vs. 24 HOUR PERIOD and usually indicate
these semantic distinctions in their dictionary entries. There were a number of cases in which
different lexicographers appeared to use near-synonyms when the dictionaries were compared in
our cross-linguistic analysis. We believe that such choices in the English translations typically
represent different lexicographers’ preferences among near-synonyms rather than genuine subtle
semantic differences. These near-synonyms were treated as a single sense in the polysemy analysis;
they are listed in Table II.
anger                fury, rage
ASH(ES)              cinders
bodily gases         fart, flatulence, etc.
celebrity            famous person, luminary
country              countryside, region, area, territory, etc. [bounded area]
darkness             dark (n.)
darkness             dark
dawn                 daybreak, sunrise
debris               rubbish, trash, garbage
EARTH/SOIL           dirt, loam, humus [= substance]
evening              twilight, dusk, nightfall
feces                dung, excrement, excreta
fireplace            hearth
flood                deluge
flow                 flowing water
ground               land [= non-water surface]
haze                 smog
heat                 warmth
heaven               heavens, Heaven, firmament, space [= place, surface up above]
liquid               fluid
lodestar             Pole star
mark                 dot, spot, print, design, letter, etc.
mist                 steam, vapor, spray
mold                 mildew, downy mildew
MOUNTAIN             mount, peak
mountainous region   mountain range
NIGHT                nighttime
noon                 midday
passion              ardor, fervor, enthusiasm, strong desire, intensity
pile                 heap, mound
pond                 pool [= small body of still water]
slope                hillside
spring               water source
steam                vapor
storm                gale, tempest
stream               brook, creek [small flowing water in channel]
sunlight             daylight, sunshine
swamp                marsh
time                 time of day (e.g. ‘what time is it?’)
world                earth/place
TABLE II. Senses treated as synonyms in our study.
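In code, the collapsing of Table II’s near-synonyms amounts to a simple lookup applied to every English gloss before polysemy links are counted. The mapping below is a small excerpt of the table, and the function name is our own:

```python
# Excerpt of the near-synonym table; each listed gloss maps to the
# canonical sense used in the polysemy analysis.
CANONICAL = {
    "fury": "anger", "rage": "anger",
    "cinders": "ASH(ES)",
    "daybreak": "dawn", "sunrise": "dawn",
    "nighttime": "NIGHT",
    "midday": "noon",
    "marsh": "swamp",
}

def normalize(gloss: str) -> str:
    """Return the canonical sense for a listed near-synonym;
    any other gloss is kept unchanged."""
    return CANONICAL.get(gloss, gloss)

print(normalize("sunrise"), normalize("SUN"))  # dawn SUN
```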
D. Bidirectional translation, and linguists’ judgments on aggregation of meanings
For each of the initial 22 Swadesh entries, we have recorded all translations from the metalanguage into the target languages, and then the back-translations of each of these into the metalanguage. Back-translation yields additional meanings beyond the original 22 Swadesh meanings.
A word in a target language is considered polysemous if its back-translation includes multiple words representing multiple senses, as described in subsection I C. In cases where the back-translation produces the same sense through more than one word in the target language, we call it synonymous polysemy, and we take the degeneracy of each such polysemy into account in our analysis as weighted links. The set of translations/back-translations of all 22 Swadesh meanings for each target language constitutes our characterization of that language. The pool of translations over the 81 target languages is the complete data set.
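A minimal sketch of this procedure follows, with hypothetical dictionary entries (w1, w2) standing in for target-language words: each Swadesh entry is translated into the target language, every resulting word is back-translated, and each translation/back-translation path is counted once, so that synonymous polysemy contributes link weight.

```python
from collections import Counter

# Hypothetical bilingual dictionary for one target language.
eng_to_target = {"MOON": ["w1", "w2"]}          # translation
target_to_eng = {"w1": ["MOON"],                # back-translation
                 "w2": ["MOON", "month"]}

def back_translations(entry):
    """Count English senses reached from `entry`, one count per
    translation/back-translation path (weighted links)."""
    senses = Counter()
    for word in eng_to_target.get(entry, []):
        for sense in target_to_eng[word]:
            senses[sense] += 1
    return senses

# w2 links MOON to month, so that word is polysemous; MOON itself is
# recovered twice (once per word), i.e. synonymous polysemy of weight 2.
print(back_translations("MOON"))
```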
The dictionaries used in our study are listed below.
1. Dickens, Patrick. 1994. English-Ju|’hoan, Ju|’hoan-English dictionary. Köln: Rüdiger
Köppe Verlag.
2. Haacke, Wilfrid H. G. and Eliphas Eiseb. 2002. A Khoekhoegowab dictionary, with an
English-Khoekhoegowab index. Windhoek: Gamsberg Macmillan.
3. Traill, Anthony. 1994. A !Xóõ dictionary. Köln: Rüdiger Köppe Verlag.
4. Bird, Charles and Mamadou Kanté. Bambara-English English-Bambara Student Lexicon.
Bloomington: Indiana University Linguistics Club.
5. Childs, G. Tucker. 2000. A dictionary of the Kisi language, with an English-Kisi index.
Köln: Rüdiger Köppe Verlag.
6. Wakeman, C. W. (ed.). 1937. A dictionary of the Yoruba language. Ibadan: Oxford University Press.
7. Abraham, R. C. 1958. Dictionary of modern Yoruba. London: University of London Press.
8. Welmers, Beatrice F. & William E. Welmers. 1968. Igbo: a learner’s dictionary. Los Angeles:
University of California, Los Angeles and the United States Peace Corps.
9. Goldie, Hugh. 1964. Dictionary of the Efik Language. Ridgewood, N.J.
10. Awde, Nicholas. 2000. Swahili Practical Dictionary. New York: Hippocrene Books.
11. Johnson, Frederick. 1969. Swahili-English Dictionary. New York: Saphrograph.
12. Kirkeby, Willy A. 2000. English-Swahili Dictionary. Dar es Salaam: Kakepela Publishing
Company (T) LTD.
13. Cyffer, Norbert. 1994. English-Kanuri Dictionary. Köln: Rüdiger Köppe Verlag.
14. Cyffer, Norbert and John Hutchison (eds.). 1990. A Dictionary of the Kanuri Language.
Dordrecht: Foris Publications.
15. Heine, Bernd. 1999. Ik dictionary. Köln: Rüdiger Köppe Verlag.
16. Creider, Jane Tapsubei and Chet A. Creider. 2001. A Dictionary of the Nandi Language.
Köln: Rüdiger Köppe Verlag.
17. Palayer, Pierre, with Massa Solekaye. 2006. Dictionnaire démé (Tchad), précédé de notes
grammaticales. Louvain: Peeters.
18. Delheure, J. 1984. Dictionnaire mozabite-français. Paris: SELAF. [Tumzabt]
19. Abraham, R. C. 1962. Dictionary of the Hausa language (2nd ed.). London: University of
London Press.
20. Awde, Nicholas. 1996. Hausa-English English-Hausa Dictionary. New York: Hippocrene
Books.
21. Skinner, Neil. 1965. Kamus na turanci da hausa: English-Hausa Dictionary. Zaria, Nigeria:
The Northern Nigerian Publishing Company.
22. Pillinger, Steve and Letiwa Galboran. 1999. A Rendille Dictionary. Köln: Rüdiger Köppe
Verlag.
23. Clarity, Beverly E., Karl Stowasser, and Ronald G. Wolfe (eds.) and D. R. Woodhead and
Wayne Beene (eds.). 2003. A dictionary of Iraqi Arabic: English-Arabic, Arabic-English.
Washington, DC: Georgetown University Press.
24. Aulestia, Gorka. 1989. Basque-English Dictionary. Reno: University of Nevada Press.
25. Aulestia, Gorka and Linda White. 1990. English-Basque Dictionary. Reno:University of
Nevada Press.
26. Aulestia, Gorka and Linda White. 1992. Basque-English English-Basque Dictionary. Reno:
University of Nevada Press.
27. Koushakdjian, Mardiros and Dicran Khantrouni. 1976. English-Armenian Modern Dictionary. Beirut: G. Doniguian & Sons.
28. McGregor, R.S. (ed.). 1993. The Oxford Hindi-English Dictionary. Oxford: Oxford University Press.
29. Pathak, R.C. (ed.). 1966. Bhargava’s Standard Illustrated Dictionary of the English Language
(Anglo-Hindi edition). Chowk, Varanasi, Banaras: Shree Ganga Pustakalaya.
30. Prasad, Dwarka. 2008. S. Chand’s Hindi-English-Hindi Dictionary. New Delhi: S. Chand &
Company.
31. Institut Nauk Narodnoj Respubliki Albanii. 1954. Russko-Albanskij Slovar’. Moscow: Gosudarstvennoe Izdatel’stvo Inostrannyx i Natsional’nyx Slovarej.
32. Newmark, Leonard (ed.). 1998. Albanian-English Dictionary. Oxford/New York: Oxford
University Press.
33. Orel, Vladimir. 1998. Albanian Etymological Dictionary. Leiden/Boston/Köln: Brill.
34. MacHale, Carlos F. et al. 1991. VOX New College Spanish and English Dictionary. Lincolnwood, IL: National Textbook Company.
35. The Oxford Spanish Dictionary. 1994. Oxford/New York/Madrid: Oxford University Press.
36. Mjuller, V. K. Anglo-russkij Slovar’. Izd. Sovetskaja Enciklopedija.
37. Ozhegov. Slovar’ Russkogo Jazyka. Gos. Izd. Slovarej.
38. Smirnickij, A. I. Russko-anglijskij Slovar’. Izd. Sovetskaja Enciklopedija.
39. Hurme, Raija, Riitta-Leena Malin, and Olli Syväoja. 1984. Uusi Suomi-Englanti Suur-Sanakirja. Helsinki: Werner Söderström Osakeyhtiö.
40. Hurme, Raija, Maritta Pesonen, and Olli Syväoja. 1990. Englanti-Suomi Suur-Sanakirja:
English-Finnish General Dictionary. Helsinki: Werner Söderström Osakeyhtiö.
41. Bayram, Ali, Ş. Serdar Türet, and Gordon Jones. 1996. Turkish-English Comprehensive
Dictionary. Istanbul: Fono/Hippocrene Books.
42. Hony, H. C. 1957. A Turkish-English Dictionary. Oxford: Oxford University Press.
43. Bawden, Charles. 1997. Mongolian-English Dictionary. London/New York: Kegan Paul
International.
44. Hangin, John G. 1970. A Concise English-Mongolian Dictionary. Indiana University Publications Volume 89, Uralic and Altaic Series. Bloomington: Indiana University.
45. Masuda, Koh (Ed.). 1974. Kenkyusha’s New Japanese-English Dictionary. Tokyo: Kenkyusha
Limited.
46. Worth, Dean S. 1969. Dictionary of Western Kamchadal. (University of California Publications in Linguistics 59.) Berkeley and Los Angeles: University of California Press.
47. Jaimoukha, Amjad M. 1997. Kabardian-English Dictionary, Being a Literary Lexicon of
East Circassian (First Edition). Amman: Sanjalay Press.
48. Klimov, G.A. and M.Š. Xalilov. 2003. Slovar Kavkazskix Jazykov. Moscow: Izdatelskaja
Firma.
49. Lopatinskij, L. 1890. Russko-Kabardinskij Slovar i Kratkoju Grammatikoju. Tiflis: Tipografija Kantseljarii Glavnonačalstvujuščago graždanskoju častju na Kavkaz.
50. Aliroev, I. Ju. 2005. Čečensko-Russkij Slovar. Moskva: Akademia.
51. Aliroev, I. Ju. 2005. Russko-Čečenskij Slovar. Moskva: Akademia.
52. Amirejibi, Rusudan, Reuven Enoch, and Donald Rayfield. 2006. A Comprehensive Georgian-English Dictionary. London: Garnett Press.
53. Gvarjalaze, Tamar. 1974. English-Georgian and Georgian-English Dictionary. Tbilisi:
Ganatleba Publishing House.
54. Hockings, Paul and Christiane Pilot-Raichoor. 1992. A Badaga-English dictionary. Berlin:
Mouton de Gruyter.
55. Institute of Far Eastern Languages, Yale University. 1966. Dictionary of Spoken Chinese.
New Haven: Yale University Press.
56. Henderson, Eugénie J. A. 1997. Bwe Karen Dictionary, with texts and English-Karen word
list, vol. II: dictionary and word list. London: University of London School of Oriental and
African Studies.
57. Walker, G. D. 1925/1995. A dictionary of the Mikir language. New Delhi: Mittal Publications (reprint).
58. Lewis, Paul and Bai Bibo. 1996. Hani-English, English-Hani dictionary. London: Kegan
Paul International.
59. Pinson, Thomas M. 1998. Naqxi-Habaq-Yiyu Geezheeq Ceeqhuil: Naxi-Chinese-English Glossary with English and Chinese index. Dallas: Summer Institute of Linguistics.
60. Heimbach, Ernest E. 1979. White Hmong-English Dictionary. Ithaca: Cornell Southeast
Asia Program, Linguistic Series IV.
61. Ramamurti, Rao Sahib G.V. 1933. English-Sora Dictionary. Madras: Government Press.
62. Ramamurti, Rao Sahib G.V. 1986. Sora-English Dictionary. Delhi: Mittal Publications.
63. Rischel, Jørgen. 1995. Minor Mlabri: a hunter-gatherer language of Northern Indochina.
Copenhagen: Museum Tusculanum Press.
64. Means, Nathalie and Paul B. Means. 1986. Sengoi-English, English-Sengoi dictionary.
Toronto: The Joint Centre on Modern East Asia, University of Toronto and York University.
[Semai]
65. Becker, Benjawan Poomsan. 2002. Thai-English, English-Thai Dictionary. Bangkok/Berkeley:
Paiboon Publishing.
66. Goodenough, Ward and Hiroshi Sugita. 1980. Trukese-English dictionary. (Memoirs of the
American Philosophical Society, 141.) Philadelphia: American Philosophical Society.
67. Goodenough, Ward and Hiroshi Sugita. 1990. Trukese-English dictionary, Supplementary volume: English-Trukese and index of Trukese word roots. (Memoirs of the American Philosophical Society, 141S.) Philadelphia: American Philosophical Society.
68. Bowden, Ross. 1997. A dictionary of Kwoma, a Papuan language of north-east New
Guinea. (Pacific Linguistics, C-134.) Canberra: The Australian National University.
69. Renck, G. L. 1977. Yagaria dictionary. (Pacific Linguistics, Series C, No. 37.) Canberra:
Research School of Pacific Studies, Australian National University.
70. Lloyd, J. A. 1992. A Baruya-Tok Pisin-English dictionary. (Pacific Linguistics, C-82.)
Canberra: The Australian National University.
71. Dutton, Tom. 2003. A dictionary of Koiari, Papua New Guinea, with grammar notes.
(Pacific Linguistics, 534.) Canberra: Australia National University.
72. Firchow, Irwin, Jacqueline Firchow, and David Akoitai. 1973. Vocabulary of Rotokas-Pidgin-English. Ukarumpa, Papua New Guinea: Summer Institute of Linguistics.
73. Laycock, Donald C. 2003. A dictionary of Buin, a language of Bougainville. (Pacific Linguistics, 537.) Canberra: The Australian National University.
74. Heath, Jeffrey. 1982. Nunggubuyu Dictionary. Canberra: Australian Institute of Aboriginal
Studies.
75. Heath, Jeffrey. 1981. Basic Materials in Mara: Grammar, Texts, Dictionary. (Pacific Linguistics, C60.) Canberra: Research School of Pacific Studies, Australian National University.
76. Henderson, John and Veronica Dobson. 1994. Eastern and Central Arrernte to English
Dictionary. Alice Springs: Institute for Aboriginal Development.
77. Bergsland, Knut. 1994. Aleut dictionary: unangam tunudgusii. Fairbanks: Alaska Native
Language Center, University of Alaska.
78. Enrico, John. 2005. Haida dictionary: Skidegate, Masset and Alaskan dialects, 2 vols. Fairbanks and Juneau, Alaska: Alaska Native Language Center and Sealaska Heritage Institute.
79. Jetté, Jules and Eliza Jones. 2000. Koyukon Athabaskan dictionary. Fairbanks: Alaska
Native Language Center.
80. Day, Gordon M. 1994. Western Abenaki Dictionary. Hull, Quebec: Canadian Museum of
Civilization.
81. Thompson, Laurence C. and M. Terry Thompson (compilers). 1996. Thompson River Salish Dictionary. (University of Montana Occasional Papers in Linguistics 12.). Missoula,
Montana: University of Montana Linguistics Laboratory.
82. Stonham, John. 2005. A Concise Dictionary of the Nuuchahnulth Language of Vancouver
Island. Native American Studies 17. Lewiston/Queenston/Lampeter: The Edwin Mellen
Press.
83. Lakota Language Consortium. 2008. New Lakota Dictionary. Bloomington: Lakhota Language Consortium.
84. Parks, Douglas R. and Lula Nora Pratt. 2008. A dictionary of Skiri Pawnee. Lincoln:
University of Nebraska Press.
85. Woodbury, Hanni. 2003. Onondaga-English / English-Onondaga Dictionary. Toronto: University of Toronto Press.
86. Dunn, John Asher. Smalgyax: A Reference Dictionary and Grammar for the Coast Tsimshian Language. Seattle: University of Washington Press. [Coast Tsimshian]
87. Barker, M. A. R. 1963. Klamath Dictionary. University of California Publications in Linguistics 31. Berkeley: University of California Press.
88. Pitkin, Harvey. Wintu Dictionary. (University of California Publications in Linguistics, 95).
Berkeley and Los Angeles: University of California Press.
89. Callaghan, Catherine A. 1987. Northern Sierra Miwok Dictionary. University of California
Publications in Linguistics 110. Berkeley/Los Angeles/London: University of California
Press.
90. Martin, Jack B. and Margaret McKane Mauldin. 2000. A dictionary of Creek/Muskogee.
Omaha: University of Nebraska Press.
91. Hofling, Charles Andrew and Félix Fernando Tesucún. 1997. Itzaj Maya-Spanish-English Dictionary/Diccionario Maya Itzaj-Español-Inglés. Salt Lake City: University of Utah Press.
92. Sapir, Edward and Morris Swadesh. Yana Dictionary (University of California Papers in
Linguistics, 22). Berkeley: University of California Press.
93. Crawford, James Mack, Jr. 1989. Cocopa Dictionary. University of California Publications
in Linguistics Vol. 114, University of California Press.
94. Dayley, Jon P. 1989. Tümpisa (Panamint) Shoshone Dictionary. (University of California
Publications in Linguistics 116.) Berkeley: Univ. of California Press.
95. Hopi Dictionary Project (compilers). 1998. Hopi Dictionary/Hopı̀ikwa Lavàytutuveni: A
Hopi-English dictionary of the Third Mesa Dialect. Tucson: University of Arizona Press.
96. Munro, Pamela, & Felipe H. Lopez. 1999. Di’csyonaary X:tèe’n Dı̀i’zh Sah Sann Lu’uc: San
Lucas Quiavinı́ Zapotec Dictionary: Diccionario Zapoteco de San Lucas Quiavinı́ (2 vols.).
Los Angeles: UCLA Chicano Studies Research Center.
97. de Barral, Basilio M.a. 1957. Diccionario Guarao-Español, Español-Guarao. Sociedad de
Ciencias Naturales La Salle, Monografias 3. Caracas: Editorial Sucre.
98. Brüning, Hans Heinrich. 2004. Mochica Wörterbuch/Diccionario Mochica: Mochica-Castellano/Castellano-Mochica. Lima: Universidad de San Martin de Porres, Escuela Profesional de Turismo y Hotelería.
99. Salas, Jose Antonio. 2002. Diccionario Mochica-Castellano/Castellano-Mochica. Lima:
Universidad de San Martin de Porres, Escuela Profesional de Turismo y Hotelerı́a.
100. Weber, David John, Félix Cayco Zambrano, Teodoro Cayco Villar, Marlene Ballena Dávila.
1998. Rimaycuna: Quechua de Huanuco. Lima: Instituto Lingüı́stico de Verano.
101. Catrileo, Marı́a. 1995. Diccionario Linguistico-Etnografico de la Lengua Mapuche. Santiago:
Editorial Andrés Bello.
102. Erize, Esteban. 1960. Diccionario Comentado Mapuche-Español. Buenos Aires: Cuadernos
del Sur.
103. Britton, A. Scott. 2005. Guaranı́-English, English-Guaranı́ Concise Dictionary. New York:
Hippocrene Books, Inc.
104. Mayans, Antonio Ortiz. 1973. Nuevo Diccionario Español-Guaraní, Guaraní-Español (Décima Edición). Buenos Aires: Librería Platero Editorial.
105. Tripp, Robert. 1995. Diccionario Amarakaeri-Castellano. Série Lingüı́stica Peruana 34.
Instituto Lingüı́stico de Verano: Ministerio de Educacion.
106. Matteson, Esther. 1965. The Piro (Arawakan) Language. University of California Publications in Linguistics 42. Berkeley/Los Angeles: University of California Press.
107. Mosonyi, Jorge C. 2002. Diccionario Básico del Idioma Kariña. Barcelona, Venezuela:
Gobernación del Estado Anzoátegui, Dirección de Cultura, Fondo Editorial del Caribe.
108. Powlison, Paul S. 1995. Nijyami Niquejadamusiy May Niquejadamuju, May Niquejadamusiy
Nijyami Niquejadamuju: Diccionario Yagua-Castellano. Série Lingüı́stica Peruana 35. Instituto Lingüı́stico de Verano: Ministerio de Educacion.
E. Trimming, collapsing, projecting
Our choice of starting categories is meant to minimize culturally or geographically specific
associations, but inevitably these enter through polysemy that results from metaphor or metonymy.
To attempt to identify polysemies that express some degree of cognitive universality rather than
pure cultural “accident”, we include in this report only polysemies that occurred in two or more
languages in the sample. The original data comprises 2263 words, translated from a starting list of
22 Swadesh meanings, and 826 meanings as distinguished by English translations. After removal
of the polysemies occurring in only a single language, the dataset was reduced to 2257 words and
236 meanings. Figure 3 shows that this results in little difference in the statistics of weighted and
unweighted degrees.
Finally, as detailed below, the most fine-grained representation of the data preserves all English
translations to all words in each target language. To produce aggregate summary statistics, we
have projected this fine-grained, heterogeneous, directed graph onto the shared English-language
nodes, with appropriately redefined links, to produce coarser-level directed and undirected graphs.
Specifically, we define a weighted graph whose nodes are English words, where each link has an
integer-valued weight equal to the number of translation-back-translation paths between them. We
show this procedure in more detail in the next section.
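These two steps, trimming singly-attested polysemies and projecting onto weighted links between English senses, can be sketched as follows; the toy lexicons are assumptions for illustration only.

```python
from collections import defaultdict
from itertools import combinations

# language -> list of (target word, set of English back-translations)
lexicons = {
    "lang_A": [("w1", {"MOON", "month"})],
    "lang_B": [("w2", {"MOON", "month"}), ("w3", {"SUN", "heat"})],
    "lang_C": [("w4", {"SUN", "DAY/DAYTIME"})],
}

# For each sense pair, record which languages attest the polysemy.
support = defaultdict(set)
for lang, words in lexicons.items():
    for _, senses in words:
        for pair in combinations(sorted(senses), 2):
            support[pair].add(lang)

# Projection onto English nodes: integer link weight = number of
# translation/back-translation paths, keeping only polysemies that
# occur in two or more languages.
weight = defaultdict(int)
for lang, words in lexicons.items():
    for _, senses in words:
        for pair in combinations(sorted(senses), 2):
            if len(support[pair]) >= 2:
                weight[pair] += 1

print(dict(weight))  # {('MOON', 'month'): 2}
```

The SUN/heat and SUN/DAY-DAYTIME pairs are each attested in only one toy language, so they are trimmed; the MOON/month link survives with weight 2.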
FIG. 3. Rank plot of meanings in descending order of their degrees and strengths. This figure is an expanded
version of Fig. 4 from the main text, in which singly-attested polysemies are retained.
II. NOTATION AND METHODS OF NETWORK REPRESENTATION

A. Network representations
Networks provide a general and flexible class of topological representations for relations in
data [9]. Here we define the network representations that we construct from translations and
back-translation to identify polysemies.
1. Multi-layer network representation
We represent translation and back-translation with three levels of graphs, as shown in Fig. 4.
Panel (a) shows the treatment of two target languages, Coast Tsimshian and Lakhota, by a multi-layer graph. To specify the procedure, the nodes are separated into three types shown in the three layers, corresponding to the input and output English words, and their target-language translations. Two types of links represent translation from English to target languages, and back-translations to English, indicated as arrows bridging the layers.2
Two initial Swadesh entries, labeled S ∈ {MOON, SUN}, are shown in the first row. Words w_S^L in language L ∈ {Coast Tsimshian, Lakhota} obtained by translation of entry S are shown in the second row, i.e., w_MOON^{Coast Tsimshian} = {gooypah, gyemk, . . .} and w_MOON^{Lakhota} = {haŋhépi wí, haŋwí, wí, . . .}.
Directed links t_{Sw} take values
\[
t_{Sw} = \begin{cases} 1 & \text{if } S \text{ is translated into } w, \\ 0 & \text{otherwise.} \end{cases} \tag{1}
\]
The bottom row shows targets m_S obtained by back-translation of all words w_S^L (fixing S or L as appropriate) into English. Here m_MOON = {MOON, month, heat, SUN}. By construction, S is always present in the set of values taken by m_S. Back-translation links t_{wm} take values
\[
t_{wm} = \begin{cases} 1 & \text{if } w \text{ is translated into } m, \\ 0 & \text{otherwise.} \end{cases} \tag{2}
\]
The sets [t_{Sw}] and [t_{wm}] can therefore be considered adjacency matrices that link the Swadesh list to each target-language dictionary and the target-language dictionary to the full English lexicon.3 We denote the multi-layer network representing a single target language L by G^L, composed of nodes {S}, {w} and {m} and matrices of links [t_{Sw}] and [t_{wm}] connecting them. Continuing with the example of G^{Coast Tsimshian}, we see that t_{gooypah,month} = 0, while t_{gooypah,MOON} = 1. One such network is constructed for each language, leading to 81 polysemy networks G^L for this study.
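The matrices [t_{Sw}] and [t_{wm}] for a toy version of the Coast Tsimshian example can be written out explicitly. The word lists follow the text, but the exact link pattern below is an illustrative assumption, not the full dictionary data:

```python
import numpy as np

S = ["MOON", "SUN"]                    # Swadesh entries
w = ["gooypah", "gyemk"]               # target-language words
m = ["MOON", "month", "heat", "SUN"]   # back-translated senses

t_Sw = np.zeros((len(S), len(w)), dtype=int)  # translation layer
t_wm = np.zeros((len(w), len(m)), dtype=int)  # back-translation layer

def link(t, rows, cols, r, c):
    t[rows.index(r), cols.index(c)] = 1

link(t_Sw, S, w, "MOON", "gooypah")
link(t_Sw, S, w, "MOON", "gyemk")      # w_MOON = {gooypah, gyemk, ...}
link(t_Sw, S, w, "SUN", "gyemk")
link(t_wm, w, m, "gooypah", "MOON")    # gooypah back-translates to MOON only
link(t_wm, w, m, "gyemk", "SUN")
link(t_wm, w, m, "gyemk", "heat")

# As in the text: t_{gooypah,month} = 0 while t_{gooypah,MOON} = 1.
assert t_wm[w.index("gooypah"), m.index("month")] == 0
assert t_wm[w.index("gooypah"), m.index("MOON")] == 1

# The matrix product counts translation/back-translation paths S -> m.
paths = t_Sw @ t_wm
print(paths)
```

Row MOON of `paths` reaches MOON, heat, and SUN through its two translations, which is how polysemies of MOON are read off the composed layers.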
2 In this graph, we regard English inputs and outputs as having different types to emphasize the asymmetric roles of the Swadesh entries and of the secondary English words introduced by back-translation. The graph could equivalently be regarded as a bipartite graph with only English and non-English nodes, and directed links representing translation. Link direction would then implicitly distinguish Swadesh from non-Swadesh English entries.
3 More formally, indices S, w, and m are random variables taking values, respectively, in the sets of 22 Swadesh entries, target-language entries in all 81 languages, and the full English lexicon. Subscripts and superscripts are then used to restrict the values of these random variables, so that w_S^L takes values only among the words in language L that translate Swadesh entry S, and m_S takes values only among the English words that are polysemes of S in some target language. We indicate random variables in math italic, and the values they take in Roman.
FIG. 4. Schematic figure of the construction of network representations. Panel (a) illustrates the multi-layer polysemy network from inputs MOON and SUN for two American languages: Coast Tsimshian and Lakhota. Panels (b) and (c) show the directed bipartite graphs for the two languages individually, which lose information about the multiple polysemes “gyemk” and “wí” found respectively in Coast Tsimshian and Lakhota. Panel (d) shows the bipartite directed graph formed from the union of links in graphs (b) and (c). Link weights indicate the total number of translation/back-translation paths that connect each pair of English-language words. Panel (e) shows the unipartite directed graph formed by identifying and merging Swadesh entries in the two different layers. Link weights here are the number of polysemies across languages in which at least one polysemous word connects the two concepts. Directed links go from the Swadesh-list seed words (MOON and SUN here) to English words found in the back-translation step. Panel (f) is a table of link numbers n^L_S = Σ_{w,m} t_{Sw} t_{wm}, where t_{Sw} and t_{wm} are binary (0 or 1) to express, respectively, a link from S to w, and from w to m in this paper. Σ_w t_{Sw} t_{wm} gives the number of paths between S and m in network representations.
The forward translation matrix T^L_> ≡ [t_{Sw}] has size 22 × Y^L, where Y^L is the number of distinct translations in language L of all Swadesh entries, and the reverse translation matrix T^L_< ≡ [t_{wm}] has size Y^L × Z^L, where Z^L is the number of distinct back-translations to English through all targets in language L. For example, Y^{Coast Tsimshian} = 27 and Z^{Coast Tsimshian} = 33.
2. Directed-hyper-graph representation
It is common that multipartite simple graphs have an equivalent expression in terms of directed
hyper-graphs [10]. A hyper-graph, like a simple graph, is defined from a set of nodes and a collection
of hyper-edges. Unlike edges in a simple graph, each of which has exactly two nodes as boundary
(dyadic), a hyper-edge can have an arbitrary set of nodes as its boundary. Directed hyper-edges
have boundaries defined by pairs of sets of nodes, called inputs and outputs, to the hyper-edge.
In a hyper-graph representation, we may regard all English entries as nodes, and words w^L_S as hyper-edges. The input to each hyper-edge is a single Swadesh entry S, and the outputs are the set of all back-translations m_w. It is perhaps more convenient to regard the simple graph in
its bipartite, directed form, as the starting point for conversion to the equivalent hyper-graph. A
separate hyper-graph may be formed for each language, or the words from multiple languages may
be pooled as hyper-edges in an aggregate hyper-graph.
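The correspondence can be sketched in code: each target-language word becomes a directed hyper-edge with one Swadesh input and a set of back-translated outputs. This is a minimal illustration with assumed entries, not the study's actual data structures:

```python
# Sketch (assumed representation, not from the paper's code) of the
# directed-hyper-graph view: each target-language word is a hyper-edge
# whose input is one Swadesh entry and whose outputs are its
# back-translations.
from dataclasses import dataclass

@dataclass(frozen=True)
class HyperEdge:
    word: str            # target-language word labelling the edge
    language: str
    input_entry: str     # single Swadesh entry S
    outputs: frozenset   # all back-translations m of the word

edges = [
    HyperEdge("gyemk", "Coast Tsimshian", "SUN", frozenset({"MOON", "heat"})),
    HyperEdge("gooypah", "Coast Tsimshian", "MOON", frozenset({"MOON"})),
]

# Pooling hyper-edges from several languages gives the aggregate
# hyper-graph; the node set is the union of inputs and outputs.
nodes = set()
for e in edges:
    nodes.add(e.input_entry)
    nodes |= e.outputs
print(sorted(nodes))  # ['MOON', 'SUN', 'heat']
```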
3. Projection to directed simple graphs and aggregation over target languages
The hyper-graph representation is a complete reflection of the input data. However, hypergraphs are more cumbersome to analyze than simple networks, and the heterogeneous character
of hyper-edges can be an obstacle to simple forms of aggregation. Therefore, most of our analysis
is performed on a projection of the tri-partite graph onto a simple network with only one kind of
node (English words). The node set may continue to be regarded as segregated between inputs
and outputs to (now bidirectional) translation, leaving a bipartite network with two node types,
or alternatively we may pass directly to a simple directed graph in which all English entries are of
identical type, and the directionality of bidirectional translations carries all information about the
asymmetry between Swadesh and non-Swadesh entries with respect to our translation procedure.
Directed bipartite graph representations for Coast Tsimshian and Lakhota separately are shown in
Fig. 4 (b) and (c), respectively, and the aggregate bipartite network for the two target languages
is shown in Fig. 4 (d).
Projection of a tripartite graph to a simpler form implicitly entails a statistical model of aggregation. The projection we will use creates links with integer weights that are the sums of link
variables in the tripartite graph. The associated aggregation model is complicated to define: link
summation treats any single polysemy as a sample from an underlying process assumed to be uniform across words and languages; however, correlations arise due to multi-way polysemy, when a
Swadesh word translates to multiple words in a target language, and more than one of these words
translates back to the same English word. This creates multiple output-nodes on the boundaries
of hyper-edges, rendering these link weights non-independent, so that graph statistics are not automatically recovered by Poisson sampling defined only from the aggregate weights given to links.
We count the link polysemy between any Swadesh node S and any English output m of bidirectional translation as a sum (e.g., within a single language L)⁴

$$
t^{L}_{Sm} \;=\; \sum_{w^{L}_{S}} t_{S w^{L}_{S}}\, t_{w^{L}_{S} m} \;=\; \left[ T^{L}_{>}\, T^{L}_{<} \right]_{Sm} . \qquad (3)
$$

B. Model for semantic space represented as a topology
As a mnemonic for the asymmetry between English entries as “meanings” and target-language entries as “words”, we may think of these graphs as overlying a topological space of meanings, and of words as “catching meanings in a set”, analogous to catching fish in the ocean using a variety of nets. Any original Swadesh meaning is a “fish” at a fixed position in the ocean, and each target-language word w^L_S is one net that catches this fish. The back-translations {m | t_{w^L_S m} = 1} are all other fish caught in the same net. If all distinct words w^L_S are interpreted as random samples of nets (a proposition which we must yet justify by showing the absence of other significant sources of correlation), then the relative distance of fish (intrinsic separation of concepts in semantic space) determines their joint-capture statistics within nets (the participation of different concept pairs in polysemies).
The “ocean” in our underlying geometry is not 2- or 3-dimensional, but has a dimension corresponding to the number of significant principal components in the summary statistics from our
data. If we use a spectral embedding to define the underlying topology from a geometry based
on diffusion in Euclidean space, the dimension D of this embedding will equal the number of English-language entries recovered in the total sample, and a projection such as multi-dimensional scaling may be used to select a smaller number of dimensions [11, 12]. In this embedding, diffusion is isotropic and all “nets” are spherical. More generally, we could envision a lower-dimensional “ocean” of meanings, and consider nets as ellipsoids characterized by eccentricity and principal directions as well as central locations. This picture of the origin of polysemy from an inherent semantic topology is illustrated in Fig. 5, and explained in further detail in the next section.

⁴ Note that for unidirectional links t_{Sw} or t_{wm}, we need not identify the language explicitly in the notation, because that identification is carried implicitly by the word w. For links in projected graphs it is convenient to add the superscript L, because both arguments of all such links are English-language entries.
FIG. 5. Hypothetical word-meaning and meaning-meaning relationships using a subset of the data from Fig. 4. Translation and back-translation across different languages reveal polysemies through which we measure a distance between one concept and another.
1. Interpretation of the model into network representation
As an example, consider the projection of the small set of data shown in Fig. 4 (b) and (c). Words in L = Coast Tsimshian are colored red. For these, we find S = MOON is connected to m = MOON via the three w^L_S values gyemgmáatk, gooypah, and gyemk. Hence t^{Coast Tsimshian}_{MOON, MOON} = 3, whereas t^{Coast Tsimshian}_{SUN, heat} = 1 (via gyemk). From the words in L = Lakhota, colored blue in Fig. 4(c), we see that again t^{Lakhota}_{MOON, MOON} = 3, while t^{Lakhota}_{SUN, heat} = 0 because there is no polysemy between these entries in Lakhota.
Diffusion models, which might be of interest for diachronic meaning shift in historical linguistics, where shift is mediated by polysemous intermediates, suggest alternative choices of projection as well. Instead of counting numbers of polysemes w^L_S between some S and a given m, a link might be labeled with the share of the polysemy of S that goes to that m. This weighting gives t^{Coast Tsimshian}_{MOON, MOON} = 3/6, because only gyemgmáatk, gooypah, and gyemk are in common out of six polysemes, and t^{Coast Tsimshian}_{MOON, heat} = 1/6, because only gyemk is associated out of six polysemes between MOON and heat. In the interpretation of networks as Markov chains of diffusion processes, this weight gives the (normalized) probability of transition to m when starting from S, as

$$
\hat{t}^{\,L}_{Sm} \;=\; \sum_{w^{L}_{S}} t_{S w^{L}_{S}}\, t_{w^{L}_{S} m} \Bigg/ \sum_{w'^{L}_{S}} \sum_{m'} t_{S w'^{L}_{S}}\, t_{w'^{L}_{S} m'} \, .
$$
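A minimal sketch of this normalization (with hypothetical path counts chosen so that S = MOON has six outgoing paths in total, echoing the 3/6 and 1/6 example):

```python
# Sketch of the fractionally weighted (Markov) projection: the integer
# path count t_Sm is divided by the total number of paths leaving S.
# The counts below are illustrative, not taken from the dataset.
from fractions import Fraction

t_Sm = {
    ("MOON", "MOON"): 3,
    ("MOON", "month"): 2,
    ("MOON", "heat"): 1,
}

def t_hat(S, m):
    """Normalized transition probability from S to m."""
    total = sum(v for (s, _), v in t_Sm.items() if s == S)
    return Fraction(t_Sm.get((S, m), 0), total)

print(t_hat("MOON", "MOON"))  # 1/2
print(t_hat("MOON", "heat"))  # 1/6
```

By construction, the outgoing weights from each S sum to one, as required for a Markov transition row.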
We may return to the analogy of catching fish in a high-dimensional sea, which is the underlying
(geometric or topological) semantic space, referring to Fig. 5. Due to the high dimensionality of
this sea, whether any particular fish m is caught depends on both the position and the angle with
which the net is cast. When the distance between S and m is very small, the angle may matter
little. A cast at a slightly different angle, if it caught S, would again catch m as well. If, instead,
m is far from the center of a net cast to catch S, only for a narrow range of angles will both S and
m be caught. An integer-weighted network measures the number of successes in catching the fish
m as a proxy for relative distance from S. The fractionally-weighted network allows us to consider
the probability of success of catching any fish other than S. If we cast a net many times but only
one succeeds in producing a polysemy, we should think that other meanings m are all remote from
S. Under a fractional weighting, the target language and the English Swadesh categorization may
have different rates of sampling, which appear in the translation dictionary. Our analysis uses
primarily the integer-weighted network.
2. Beyond the available sample data
In the representation of the fine-grained graph as a directed, bipartite graph, English words S
and m, and target-language words w, are formally equivalent. The asymmetry in their roles comes
only from the asymmetry in our sampling protocol over instances of translation. An ideal, fully
symmetric dataset might contain translations between all pairs of languages (L, L′). In such a dataset, polysemy with respect to any language L could be obtained by an equivalent projection of all languages other than L. A test for the symmetry of the initial words in such a dataset can come from projecting out all intermediate languages other than L and L′, and comparing the projected links from L to L′ through other intermediate languages against the direct translation dictionary. A possible area for future work from our current dataset (since curated all-to-all translation will not be available in the foreseeable future) is to attempt to infer the best approximate translation maps, e.g. between Coast Tsimshian and Lakhota, through an intermediate sum T^{Coast Tsimshian}_< T^{Lakhota}_> analogous to Eq. (3), as a measure of the overlap of the graphs G^{Coast Tsimshian} and G^{Lakhota}.
FIG. 6. Connectance graph of Swadesh meanings excluding non-Swadesh English words.
C. The aggregated network of meanings
The polysemy networks of 81 languages, constructed in the previous subsection, are aggregated
into one network structure as shown in Fig. 2 in the main text. Two types of nodes are distinguished
by the case of the label on each node. All-capital labels indicate Swadesh words while all-lowercase
indicate non-Swadesh words. The width of each link is the number of polysemes joining the two
meanings at its endpoints, including in this count the sum of all synonyms within each target
language that reproduce the same polysemy. For example, the thick link between SKY and heaven indicates that more distinct polysemes connect these two meanings than any other pair of entries in the graph.
D. Synonymous polysemy: correlations within and among languages
Synonymous polysemy provides the first in a series of tests that we will perform to determine whether the aggregate graph generated by addition of polysemy links is a good summary statistic for the process of word-meaning pairing in actual languages that leads to polysemy. The null model for sampling from the aggregate graph is that each word for a given Swadesh entry S has a fixed probability of being polysemous with a given meaning entry m, independent of the presence or absence
of any other polysemes of S with m in the same language. Violations of the null model include
excess synonymous polysemy (suggesting, in our picture of an underlying semantic space, that the
“proximity” of meanings is in part dynamically created by formation of polysemies, increasing their
likelihood of duplication), or deficit synonymous polysemy (suggesting that languages economize
on the semantic scope of words by avoiding duplications).

FIG. 7. The number of synonymous polysemies within a language is correlated with the number of languages containing a given polysemy. The horizontal axis indicates the number of languages (out of 81) in which a Swadesh entry S is polysemous with a given meaning m, for meanings found to be polysemous in at least two languages. The vertical axis indicates the average number of synonymous polysemies per language in which the polysemous meaning is represented. Circle areas are proportional to the number of meanings m over which the average was taken. The red line represents the least-squares regression over all (non-averaged) data and has slope and intercept of 0.0029 and 1.05, respectively.
The data presented in Fig. 7 shows that if a given polysemy is represented in more languages, it
is also more likely to be captured by more than one word within a given language. This is consistent
with a model in which proximity relationships among meanings are preexisting. Models in which
the probability of a synonymous polysemy was either independent of the number of polysemous
languages (corresponding to a slope of zero in Fig. 7) or quadratic in the number of languages were
rejected by both AIC and BIC tests.
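The model comparison can be sketched as follows on synthetic data with a weak positive slope (all numbers here are hypothetical, loosely echoing the slope ~0.003 and intercept ~1.05 of Fig. 7; the real analysis fits the observed data). A linear model is preferred over a constant one when its AIC, 2k + n ln(RSS/n), is lower:

```python
# Sketch of the AIC comparison between a constant and a linear model,
# fit by least squares on synthetic data (hypothetical numbers only).
import math, random

random.seed(42)
n = 200
xs = [random.uniform(0, 60) for _ in range(n)]
ys = [1.05 + 0.003 * x + random.gauss(0, 0.05) for x in xs]

def rss_constant(ys):
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys)

def rss_linear(xs, ys):
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return sum((y - intercept - slope * x) ** 2 for x, y in zip(xs, ys))

def aic(rss, n, k):
    """AIC up to an additive constant, assuming Gaussian residuals."""
    return 2 * k + n * math.log(rss / n)

print(aic(rss_linear(xs, ys), n, 2) < aic(rss_constant(ys), n, 1))  # True
```

BIC works identically with the penalty 2k replaced by k ln n.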
We partitioned the synonymous polysemy data and performed a series of Mann-Whitney U tests. We partitioned all polysemies according to the following scheme: a polysemy is a member of the set c_{s,p} if its language contains p polysemies for the given Swadesh word, s of which share this polysemous meaning. For each category, we constructed a list D_{s,p} of the numbers of languages in which each polysemous meaning in the set c_{s,p} is found. We then tested all pairs of D_{s1,p} and D_{s2,p} for whether they could have been drawn from the same distribution.
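A pure-Python sketch of the U statistic underlying these tests (ties handled with midranks; the lists below are toy inputs, not the D_{s,p} data, and a full test would also convert U to a p-value):

```python
# Sketch of the Mann-Whitney U statistic: jointly rank both samples
# (midranks for ties), sum the ranks of the first sample, and subtract
# the minimum possible rank sum.  Not the authors' code.

def mann_whitney_u(a, b):
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1                      # j..i-1 span a run of ties
        midrank = (i + j + 1) / 2       # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[combined[k][1]] = midrank
        i = j
    r_a = sum(ranks[: len(a)])          # rank sum of the first sample
    return r_a - len(a) * (len(a) + 1) / 2

print(mann_whitney_u([1, 2, 3], [4, 5]))  # 0.0: a entirely below b
print(mann_whitney_u([4, 5], [1, 2, 3]))  # 6.0 = len(a) * len(b)
```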
List 1   List 2   Median ratio   p-value
D_{0,1}  D_{1,1}  0.167    2.53 × 10^-433
D_{0,2}  D_{1,2}  0.333    1.08 × 10^-170
D_{0,2}  D_{2,2}  0.0526   1.39 × 10^-56
D_{1,2}  D_{2,2}  0.158    2.10 × 10^-14
D_{0,3}  D_{1,3}  0.500    7.18 × 10^-44
D_{0,3}  D_{2,3}  0.167    1.11 × 10^-28
D_{0,3}  D_{3,3}  0.0222   4.81 × 10^-9
D_{1,3}  D_{2,3}  0.333    6.50 × 10^-5
D_{1,3}  D_{2,3}  0.0444   4.53 × 10^-5
D_{2,3}  D_{3,3}  0.133    9.28 × 10^-4
D_{0,4}  D_{1,4}  1.00     3.15 × 10^-17
D_{0,4}  D_{2,4}  0.0714   1.86 × 10^-13
D_{0,4}  D_{3,4}  0.0667   5.63 × 10^-5
D_{1,4}  D_{2,4}  0.0714   7.66 × 10^-4
D_{1,4}  D_{3,4}  0.0667   0.084
D_{2,4}  D_{3,4}  0.993    1.0
D_{0,5}  D_{1,5}  0.143    1.03 × 10^-27
D_{0,5}  D_{2,5}  0.182    6.20 × 10^-7
D_{0,5}  D_{3,5}  0.0323   5.09 × 10^-6
D_{1,5}  D_{2,5}  1.27     0.35
D_{1,5}  D_{3,5}  0.226    0.15
D_{2,5}  D_{3,5}  0.177    0.06
D_{0,6}  D_{1,6}  1.00     0.15
D_{0,6}  D_{2,6}  0.0247   1.44 × 10^-5
D_{1,6}  D_{2,6}  0.0247   0.06
D_{0,7}  D_{1,7}  1.00     0.20
In most comparisons, the null hypothesis that the two lists were drawn from the same distribution was strongly rejected, always in the direction where the list with the larger number of synonymous polysemies (larger values of s) contained larger numbers, meaning that those polysemies were found in a greater number of languages. For a few of the comparisons, the null hypothesis was not rejected, corresponding to cases where one or both lists included a small number of entries (< 10).
The table above shows all comparisons for lists of length greater than one. The first two columns indicate which two lists are being compared. The third column gives the ratio of the median values of the two lists, with values less than one indicating that the median of the list in the first column is lower than the median of the list in the second column.
We return in Sec. IV to demonstrate a slight excess of probability for individual entries to be included in polysemies.
E. Node degree and link presence/absence data
The goodness of this graph as a summary statistic, and the extent to which the heterogeneity of
its node degree and the link topology reflect universals in the sense advocated by Greenberg [13],
may be defined as the extent to which individual language differences are explained as fluctuations
in random samples. We begin systematic tests of the goodness of the aggregate graph with the
degrees of its nodes, a coarse-grained statistic that is most likely to admit the null model of
random sampling, but which also has the best statistical resolution among the observables in our
data. These tests may later be systematically refined by considering the presence/absence statistics
of the set of polysemy links, their covariances, or higher-order moments of the network topology.
At each level of refinement we introduce a more restrictive test, but at the same time we lose
statistical power because the number of possible states grows faster than the data in our sample.
We let n^L_m denote the degree of meaning m, defined as the sum of weights of links to m, in language L. Here m may stand for either Swadesh or non-Swadesh entries ({S} ⊂ {m}). n_m ≡ Σ_L n^L_m is then the degree of meaning m in the aggregated graph of Fig. 2 (main text), shown in a rank-size plot in Fig. 3. N ≡ Σ_m n_m = Σ_{m,L} n^L_m denotes the sum of all link weights in the aggregated graph.
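In code, this bookkeeping is a pair of sums (the weights below are hypothetical, not the measured degrees):

```python
# Sketch of the degree bookkeeping: n_m^L is the weighted degree of
# meaning m in language L, n_m sums it over languages, and N sums all
# link weights.  Values are illustrative only.
from collections import defaultdict

n_Lm = {  # (language, meaning) -> weighted degree n_m^L
    ("Coast Tsimshian", "MOON"): 4,
    ("Coast Tsimshian", "heat"): 1,
    ("Lakhota", "MOON"): 5,
    ("Lakhota", "month"): 2,
}

n_m = defaultdict(int)
for (L, m), w in n_Lm.items():
    n_m[m] += w          # aggregate degree of meaning m

N = sum(n_m.values())    # total link weight in the aggregated graph
print(dict(n_m))  # {'MOON': 9, 'heat': 1, 'month': 2}
print(N)          # 12
```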
F. Node degree and Swadesh meanings
The Swadesh list was introduced to give priority to lexical items favoring universality, stability, and some notion of “core” or “basic” vocabulary. Experience within historical
linguistics suggests qualitatively that it satisfies these criteria well, but the role of the Swadesh
list within semantics has not been studied with quantitative metrics. We may check the degree to
which the items in our basic list are consistent with a notion of core vocabulary by studying their
position in the rank-size distribution of Fig. 4 in the main text.
Our sampling methodology naturally places the starting search words (capitalized black characters) high in the distribution, such as EARTH/SOIL, WATER and DAY/DAYTIME with more than
100 polysemous words, because they are all connected to other polysemes produced by polysemy
sampling. Words that are not in the original Swadesh list, but which are uncovered as polysemes
(red), are less fully sampled. They show high degree only if they are connected to multiple Swadesh
entries.5 These derived polysemes fall mostly in the power-law tail of the rank-size distribution in
Fig. 3. The few entries of high degree serve as candidates for inclusion in an expanded Swadesh
list, on the grounds that they are frequently recovered in basic vocabulary. Any severe violation
of the segregation of Swadesh from non-Swadesh entries (hence, the appearance of many derived
polysemes high in the rank-size distribution) would have indicated that the Swadesh entries were
embedded in a larger graph with high clustering coefficient, and would have suggested that the
low-ranking Swadesh words were not statistically favored as starting points to sample a semantic
network.6
III. UNIVERSAL STRUCTURE: CONDITIONAL DEPENDENCE
We performed an extensive range of tests to determine whether language differences in the distribution of 1) the number of polysemies, 2) the number of meanings (unweighted node degree),
and 3) the average proximity of meanings (weighted node degree, or “strength”) are correlated
with language relatedness, or with geographic or cultural characteristics of the speaker populations, including the presence or absence of a writing system. The interpretation is analogous to
that of population-level gene frequencies in biology. Language differences that covary with relatedness disfavor the Greenbergian view of typological universals of human language, and support a
Whorfian view that most language differences are historically contingent and recur due to vertical
transmission within language families. Differences that covary with cultural or geographical parameters suggest that language structure responds to extra-linguistic conditions instead of following
universal endogenous constraints. We find no significant regression of the patterns in our degree
distribution on any cladistic, cultural, or geographical parameters. At the same time, we found
single-language degree distributions consistent with a model of random sampling (defined below),
suggesting that the degree distribution of polysemies is an instance of a Greenbergian universal.
Ruling out dummy variables of clade and culture has a second important implication for studies
⁵ Note that two non-Swadesh entries cannot be linked to each other, even if they appear in a multi-way polysemy, because our protocol for projecting hypergraphs to simple graphs only generates links between the (Swadesh) inputs and the outputs of bidirectional translation.
⁶ With greater resources, a bootstrapping method to extend the Swadesh list by following second- and higher-order polysemes could provide a quantitative measure of the network position of the Swadesh entries among all related words.
of this kind. We chose to collect data by hand from printed dictionaries, foregoing the sample
size and speed of the many online language resources now available, to ensure that our sample
represents the fullest variation known among languages. Online dictionaries and digital corpora
are dominated by a few languages from developed countries, with long-established writing systems
and large speaker populations, but most of these fall within a small number of European or Asian
language families. Our demonstration that relatedness does not produce a strong signal in the
parameters we have measured opens the possibility of more extensive sampling from digital sources.
We note two caveats regarding such a program, however. First, languages for this study were
selected to maximize phylogenetic distance, with no two languages being drawn from the same
genus. It is possible that patterns of polysemy could be shared among more closely related groups
of languages. Second, the strength of any phylogenetic signal might be expected to vary across
semantic domains, so any future analysis will need to be accompanied by explicit universality tests
like those performed here.
A. Comparing semantic networks between language groups
We performed several tests to see if the structure of the polysemy network depends, in a statistically significant way, on typological features, including the presence or absence of a literary
tradition, geography, topography, and climate. The geographical information is obtained from the
LAPSyD database [14]. We choose the climate categories as the major types A (Humid), B (Arid),
and C–E (Cold) from the Köppen-Geiger climate classification [15], where C–E have been merged
since each of those had few or no languages in our sample. The typological features tested, and the number of languages in each category, are listed in Table III.
1. Mantel test
Given a set S of languages, we define a weighted graph between English words as shown in Fig. 2 in the main text. Each matrix entry A_{ij} is the total number of foreign words, summed over all languages in S, that can be translated or back-translated to or from both the ith and the jth word. From this network, we find the commute distance between the vertices. The commute distance is the expected number of steps a random walker needs to take to go from one vertex to another and back [16]. It is proportional to the more commonly used resistance distance, with a proportionality factor equal to the sum of all resistances (inverse link weights) in the network.
Variable            Subset                           Size
Geography           Americas                         29
                    Eurasia                          20
                    Africa                           17
                    Oceania                          15
Climate             Humid                            38
                    Cold                             30
                    Arid                             13
Topography          Inland                           45
                    Coastal                          36
Literary tradition  Some or long literary tradition  28
                    No literary tradition            53

TABLE III. Various groups of languages based on nonlinguistic variables. For each variable we measured the difference between the subsets’ semantic networks, defined as the tree distance between the dendrograms of Swadesh words generated by spectral clustering.
For the subgroups of languages, the networks are often disconnected. We therefore regularize them by adding links between all vertices with a small weight of 0.1/[n(n − 1)], where n is the number of vertices in the graph, when calculating the resistance distance. We do not include this regularization
in calculating the proportionality constant between the resistance and commute distances. Finally,
we ignore all resulting distances that are larger than n when making comparisons.
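A sketch of the regularized resistance-distance computation via the Laplacian pseudoinverse (a standard identity, applied here to toy 3-node weights rather than the actual network; the commute distance then follows by the proportionality noted above):

```python
# Sketch: resistance distance R_ij = Lp_ii + Lp_jj - 2 Lp_ij, where Lp
# is the Moore-Penrose pseudoinverse of the graph Laplacian, after
# adding the weak regularizing links 0.1/[n(n-1)] between all pairs.
# Toy weights, not the study's data.
import numpy as np

def resistance_distances(W):
    n = W.shape[0]
    Wr = W + 0.1 / (n * (n - 1)) * (np.ones((n, n)) - np.eye(n))
    L = np.diag(Wr.sum(axis=1)) - Wr       # graph Laplacian
    Lp = np.linalg.pinv(L)                 # pseudoinverse
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

W = np.array([[0.0, 3.0, 1.0],             # strong 0-1 link, weak 0-2 link
              [3.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
R = resistance_distances(W)
print(R[0, 1] < R[0, 2])  # True: stronger links give smaller distances
```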
The actual comparison of the distance matrices from two graphs is done by calculating the
Pearson correlation coefficient, r, between the two. This is then compared to the null expectation of
no correlation by generating the distribution of correlation coefficients on randomizing the concepts
in one distance matrix, holding the other fixed. The Mantel test p-value, p1 , is the proportion of
this distribution that is higher than the observed correlation coefficient.
To test whether the observed correlation is typical of random language groups, we randomly sample without replacement from the available languages to form groups of the same sizes, and calculate the correlation coefficient between the corresponding distances. The proportion of this distribution that lies lower than the observed correlation coefficient provides p2.
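The permutation logic behind p1 can be sketched in pure Python (toy distance matrices, not the commute distances; the p2 test would resample language subsets instead of relabelling concepts):

```python
# Sketch of the Mantel test: Pearson r between the upper triangles of
# two distance matrices, compared against a null distribution obtained
# by relabelling the concepts of one matrix.  Toy inputs only.
import math, random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def upper(D, order):
    n = len(D)
    return [D[order[i]][order[j]] for i in range(n) for j in range(i + 1, n)]

def mantel_p(D1, D2, trials=999, seed=1):
    rng = random.Random(seed)
    ident = list(range(len(D1)))
    observed = pearson(upper(D1, ident), upper(D2, ident))
    count = 0
    for _ in range(trials):
        perm = ident[:]
        rng.shuffle(perm)                 # relabel concepts of D1
        if pearson(upper(D1, perm), upper(D2, ident)) >= observed:
            count += 1
    return observed, count / trials

pts = [0, 1, 2, 5, 9]                     # toy "concept" positions
D1 = [[abs(a - b) for b in pts] for a in pts]
D2 = [[abs(a - b) * 1.1 for b in pts] for a in pts]
r, p = mantel_p(D1, D2)
print(r > 0.99, p < 0.05)  # True True
```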
2. Hierarchical clustering test
The commute measures used in the Mantel test, however, only examine the sets that are connected in the networks from the language groups. To understand the longer distance structure, we
instead look at the hierarchical classification obtained from the networks. We cluster the vertices of
the graphs, i.e., the English words, using a hierarchical spectral clustering algorithm. Specifically,
we assign each word i a point in Rn based on the ith components of the eigenvectors of the n × n
weighted adjacency matrix, where each eigenvector is weighted by the square of its eigenvalue.
We then cluster these points with a greedy agglomerative algorithm, which at each step merges
the pair of clusters with the smallest squared Euclidean distance between their centers of mass.
This produces a binary tree or dendrogram, where the leaves are English words, and internal nodes
correspond to groups and subgroups of words. We obtained these for all 826 English words, but
for simplicity we show results here for the 22 Swadesh words.
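A compact sketch of the procedure on a toy 4-word similarity matrix with two obvious pairs (the eigenvalue-squared weighting and greedy centroid merging follow the description above, but this is not the study's implementation):

```python
# Sketch of hierarchical spectral clustering: embed word i using the
# ith components of the eigenvectors of the weighted adjacency matrix,
# each eigenvector scaled by its squared eigenvalue, then greedily
# merge the two clusters with the closest centers of mass.
import numpy as np

def spectral_points(A):
    vals, vecs = np.linalg.eigh(A)
    return vecs * (vals ** 2)      # column j scaled by eigenvalue_j^2

def greedy_dendrogram(points):
    # cluster id -> (member leaves, centroid, size)
    clusters = {i: ([i], points[i], 1) for i in range(len(points))}
    merges, next_id = [], len(points)
    while len(clusters) > 1:
        best, ids = None, sorted(clusters)
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                d = np.sum((clusters[ids[i]][1] - clusters[ids[j]][1]) ** 2)
                if best is None or d < best[0]:
                    best = (d, ids[i], ids[j])
        _, x, y = best
        (mx, cx, nx), (my, cy, ny) = clusters.pop(x), clusters.pop(y)
        clusters[next_id] = (mx + my, (cx * nx + cy * ny) / (nx + ny), nx + ny)
        merges.append((x, y))
        next_id += 1
    return merges

# Toy similarity: words {0,1} form one tight pair, {2,3} another.
A = np.array([[2, 2, 0, 0],
              [2, 2, 0, 0],
              [0, 0, 2, 2],
              [0, 0, 2, 2]], float)
merges = greedy_dendrogram(spectral_points(A))
print(sorted(merges[:2]))  # [(0, 1), (2, 3)]: the tight pairs merge first
```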
Doing this where S is the set of all 81 languages produces the dendrogram shown in Fig. 8.
We applied the same approach where S is a subgroup of the 81 languages, based on nonlinguistic
variables such as geography, topography, climate, and the presence or absence of a literary tradition.
These groups are shown, along with the number of languages in each, in Table III.
For each nonlinguistic variable, we measured the difference between the semantic network for
each pair of language groups, defined as the distance between their dendrograms. We used two
standard tree metrics taken from the phylogenetic literature. The triplet distance D_triplet [17, 18] is the fraction of the $\binom{n}{3}$ distinct triplets of words that are assigned a different topology in the two
trees: that is, such that the trees disagree as to which pair of these three words is more closely
related to each other than to the third. The Robinson-Foulds distance DRF [19] is the number
of “cuts” on which the two trees disagree, where a cut is a separation of the leaves into two sets
resulting from removing an edge of the tree.
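For small rooted dendrograms stored as nested tuples, the Robinson-Foulds count can be sketched as follows (a simplified variant that counts all internal clades, including the root; toy trees, not the actual dendrograms):

```python
# Sketch of the Robinson-Foulds distance: each internal node of a
# nested-tuple tree induces a "cut" (the set of leaves below it), and
# D_RF counts cuts present in one tree but not the other.

def leaf_set(tree):
    if isinstance(tree, tuple):
        out = set()
        for child in tree:
            out |= leaf_set(child)
        return out
    return {tree}

def clades(tree, acc=None):
    """Set of leaf-sets of all internal nodes of a nested-tuple tree."""
    if acc is None:
        acc = set()
    if isinstance(tree, tuple):
        acc.add(frozenset(leaf_set(tree)))
        for child in tree:
            clades(child, acc)
    return acc

def rf_distance(t1, t2):
    return len(clades(t1) ^ clades(t2))   # symmetric difference of cuts

t1 = ((("SUN", "MOON"), "STAR"), ("SEA", "SALT"))
t2 = ((("SUN", "STAR"), "MOON"), ("SEA", "SALT"))
print(rf_distance(t1, t2))  # 2: they disagree only on the innermost pair
```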
We then performed two types of bootstrap experiments, comparing these distances to those
one would expect under the null hypotheses. First we considered the hypothesis that there is no
underlying notion of relatedness between senses—for instance, that every pair of words is equally
likely to be siblings in the dendrogram. If this were true, then the dendrograms of each pair of
groups would be no closer than if we permuted the senses on their leaves randomly (while keeping
the structure of the dendrograms the same). Comparing the actual distance between each pair of
groups to the resulting distribution gives the p-values, labeled p1 , shown in Figure 3 in the main
text. These p-values are small enough to decisively reject the null hypothesis; indeed, for most
pairs of groups the Robinson-Foulds distance is smaller than that observed in any of the 1000
bootstrap trials, making the p-value effectively zero. This gives overwhelming evidence that the
semantic network has universal aspects, applying across language groups: for instance, in every
group we tried, SEA/OCEAN and SALT are more related than either is to SUN.
In the second type of bootstrap experiment, the null hypothesis is that the nonlinguistic variables
have no effect on the semantic network, and that the differences between language groups simply
FIG. 8. The dendrogram of Swadesh words generated from spectral clustering on the polysemy network
taken over all 81 languages. The three largest groups are highlighted; roughly speaking, they comprise
earth-related, water-related, and sky-related concepts.
result from random sampling: for instance, that the distance between the dendrograms for the
Americas and Eurasia is what one would expect from any disjoint subsets S1 , S2 of the 81 languages
of sizes |S1 | = 29 and |S2 | = 20 respectively. To test this, we generate random pairs of disjoint
subsets with the same sizes as the groups in question, and measure the resulting distribution of
distances. The resulting p-values are labeled p2 in Table 1. These p-values are not small enough
to reject the null hypothesis. Thus, at least given the current data set, it does not appear that
these nonlinguistic variables have a statistically significant effect on the semantic network—further
supporting our thesis that it is, at least in part, universal.
For illustration, in Fig. 3 (main text) we compare the triplet distance between the dendrograms for the Americas and Oceania with the distributions from the two bootstrap experiments. Fewer than 2% of the trees with randomly permuted senses are as close as these two groups, whereas 38% of the random pairs of subsets of sizes 29 and 15 are farther apart. Using a p-value of 0.05 as the usual threshold, we can reject the hypothesis that these two groups have no semantic structure in common; moreover, we cannot reject the hypothesis that the differences between them are due to random sampling rather than geographic differences.
B. Statistical significance
The p-values reported in Fig. 3 have to be corrected for multiple tests. Eleven independent comparisons are performed for each of the metrics, so a low p-value is occasionally expected simply by chance. In fact, under the null hypothesis, a column will contain a single p = 0.01 by chance about 10% of the time. To correct for this, one can employ a Bonferroni correction [20], leading to a significance threshold of 0.005 for each of the 11 tests, corresponding to a test size of p = 0.05. Most of the comparisons in the p1 columns for r and DRF are comfortably below this threshold, implying that the networks obtained from different language groups are indeed significantly more similar than comparable random networks.
A Bonferroni correction, however, is known to be aggressive: it controls the false-positive error rate but leads to many false negatives [21], and is not appropriate for establishing the lack of significance for the p2 columns. The composite hypothesis that none of the comparisons is statistically significant predicts that the corresponding p-values are uniformly distributed between 0 and 1. One can, therefore, test the obtained p-values against this expected uniform distribution. We performed a Kolmogorov-Smirnov test for uniformity on each column of the table. This composite p-value is about 0.11 and 0.27 for the p2 columns corresponding to Dtriplet and DRF, showing that these columns are consistent with chance fluctuations. The p-value corresponding to the p2 column for r is about 0.03, evidence that at least one pair of networks is more dissimilar than expected for a random grouping of languages. This is consistent with the indication from the Bonferroni threshold as well: the comparison of Americas and Eurasia has a significant p-value, as possibly also the comparison between Humid and Arid. Removing either of these comparisons raises the composite p-value to 0.10, showing that such a distribution containing one low p-value (but not two) would be expected to occur by chance about 10% of the time.
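The uniformity test used here can be sketched as follows; the implementation below is a standard one-sample Kolmogorov-Smirnov test against U(0,1) with an asymptotic p-value, not the paper's exact code, and the example column of p-values is a placeholder:

```python
import math

def ks_uniform(pvals):
    """One-sample KS test of p-values against the uniform distribution on [0, 1].
    Returns (D, approximate p-value), using the asymptotic Kolmogorov series
    with Stephens' small-sample correction for the effective sample size."""
    x = sorted(pvals)
    n = len(x)
    # KS statistic: largest gap between the empirical CDF and F(x) = x
    d = max(max((i + 1) / n - xi, xi - i / n) for i, xi in enumerate(x))
    lam = (math.sqrt(n) + 0.12 + 0.11 / math.sqrt(n)) * d
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * (k * lam) ** 2)
                  for k in range(1, 101))
    return d, min(1.0, max(0.0, p))

# Bonferroni threshold for 11 comparisons at an overall test size of 0.05:
bonferroni_threshold = 0.05 / 11   # ~0.0045, close to the 0.005 quoted above
```

A roughly uniform column yields a composite p-value near 1; a column of p-values piled near zero is decisively rejected as uniform.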
C. Single-language graph size is a significant summary statistic
The only important language-dependent variable not attached to words in the aggregate graph of Fig. 2 (main text), which is a strongly significant summary statistic for samples, is the total link weight in the language, $n^L \equiv \sum_S n^L_S$. In the next section we will quantify the role of this variable in producing single-language graphs as samples from the aggregate graph, conditioned on the total weight of the language.
Whereas we do associate node degree and link weight in the aggregate graph with inherent and
universal aspects of human language, we cannot justify a similar interpretation for the total weight
of links within each language. The reason is that total weight — which may reflect a systematic
variation among languages in the propensity to create polysemous words — may also be affected
by reporting biases that differ among dictionaries. Different dictionary writers may be more or less
inclusive in the meaning range they report for words. Additional factors, such as the influence of
poetic traditions in languages with long written histories, may preserve archaic usages alongside
current vernacular, leading to systematic differences in the data available to the field linguist.
D. Conclusion
By ruling out correlation and dependence on the exogenous variables we have tested, our data
are broadly consistent with a Greenbergian picture in which whatever conceptual relations underlie
polysemy are a class of typological universals. They are quantitatively captured in the node degrees
and link weights of a graph produced by simple aggregation over languages. The polysemes in
individual languages appear to be conditionally independent given the graph and a collection of
language-specific propensities toward meaning aggregation, which may reflect true differences in
language types but may also reflect systematic reporting differences.
IV. MODEL FOR DEGREE OF POLYSEMY

A. Aggregation of language samples
We now consider more formally why sample aggregates may not simply be presumed to be valid summary statistics: they entail implicit generating processes that must be tested. By
demonstrating an explicit algorithm that assigns probabilities to samples of Swadesh node degrees,
presenting significance measures consistent with the aggregate graph and the sampling algorithm,
and showing that the languages in our dataset are typical by these measures, we justify the use
and interpretation of the aggregate graph (Fig. 2 in the main text).
We begin by introducing an error measure appropriate to independent sampling from a general
mean degree distribution. We then introduce calibrated forms for this distribution that reproduce
the correct sample means as functions of both Swadesh-entry and language-weight properties.
The notion of consistency with random sampling is generally scale-dependent. In particular, the
existence of synonymous polysemy may cause individual languages to violate criteria of randomness,
but if the particular duplicated polysemes are not correlated across languages, even small groups
of languages may rapidly converge toward consistency with a random sample. Indeed, Section II D shows the independence of synonymous polysemy. Therefore, we do not present only a single
acceptance/rejection criterion for our dataset, but rather show the smallest groupings for which
sampling is consistent with randomness, and then demonstrate a model that reproduces the excess
but uncorrelated synonymous polysemy within individual languages.
B. Independent sampling from the aggregate graph
Figure 2 (main text) treats all words in all languages as independent members of an unbiased
sample. To test the appropriateness of the aggregate as a summary statistic, we ask: do random
samples, with link numbers equal to those in observed languages, and with link probabilities
proportional to the weights in the aggregate graph, yield ensembles of graphs within which the
actual languages in our data are typical?
1. Statistical tests
The appropriate summary statistic to test for typicality in ensembles produced by random sampling (of links or link-ends) is the Kullback-Leibler (KL) divergence of the sample counts from the probabilities with which the samples were drawn [22, 23]. This is because the KL divergence is the leading exponential approximation (by Stirling's formula) to the log of the multinomial distribution produced by Poisson sampling.
The appropriateness of a random-sampling model may be tested independently of how the
aggregate link numbers are used to generate an underlying probability model. In this section, we
will first evaluate a variety of underlying probability models under Poisson sampling, and then we
will return to tests for deviations from independent Poisson samples. We first introduce notation:
For a single language, the relative degree (link frequency), which is used as the normalization of a probability, is denoted as $p^{\rm data}_{S|L} \equiv n^L_S / n^L$, and for the joint configuration of all words in all languages, the link frequency of a single entry relative to the total $N$ will be denoted $p^{\rm data}_{SL} \equiv n^L_S / N = (n^L_S / n^L)(n^L / N) \equiv p^{\rm data}_{S|L}\, p^{\rm data}_L$.

Corresponding to any of these, we may generate samples of links to define the null model for a random process, which we denote $\hat n^L_S$, $\hat n^L$, etc. We will generally use samples with exactly the same number of total links $N$ as the data. The corresponding sample frequencies will be denoted by $p^{\rm sample}_{S|L} \equiv \hat n^L_S / \hat n^L$ and $p^{\rm sample}_{SL} \equiv \hat n^L_S / N = (\hat n^L_S / \hat n^L)(\hat n^L / N) \equiv p^{\rm sample}_{S|L}\, p^{\rm sample}_L$, respectively.
Finally, the calibrated model, which we define from properties of aggregated graphs, will be the prior probability from which samples are drawn to produce p-values for the data. We denote the model probabilities (which are used in sampling as "true" probabilities rather than sample frequencies) by $p^{\rm model}_{S|L}$, $p^{\rm model}_{SL}$, and $p^{\rm model}_L$.
For $n^L$ links sampled independently from the model distribution $p^{\rm model}_{S|L}$ for language $L$, the multinomial probability of obtaining a particular set $n^L_S$ (equivalently, the sample frequencies $p^{\rm sample}_{S|L}$) may be written, using Stirling's formula to leading exponential order, as

$$p\left(n^L_S \,\middle|\, n^L\right) \sim e^{-n^L D\left(p^{\rm sample}_{S|L} \,\|\, p^{\rm model}_{S|L}\right)} \qquad (4)$$

where the Kullback-Leibler (KL) divergence [22, 23] is

$$D\left(p^{\rm sample}_{S|L} \,\middle\|\, p^{\rm model}_{S|L}\right) \equiv \sum_S p^{\rm sample}_{S|L} \log\left(\frac{p^{\rm sample}_{S|L}}{p^{\rm model}_{S|L}}\right). \qquad (5)$$
For later reference, note that the leading quadratic approximation to Eq. (5) is

$$n^L D\left(p^{\rm sample}_{S|L} \,\middle\|\, p^{\rm model}_{S|L}\right) \approx \frac{1}{2} \sum_S \frac{\left(\hat n^L_S - n^L p^{\rm model}_{S|L}\right)^2}{n^L p^{\rm model}_{S|L}}, \qquad (6)$$

so that the variance of fluctuations in each word is proportional to its expected frequency.
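As a concrete check of Eqs. (5) and (6), with invented counts for a hypothetical language (the numbers are placeholders, not from the dataset):

```python
import math

def kl_divergence(p_sample, p_model):
    """Eq. (5): D(p_sample || p_model) in nats, with 0 log 0 = 0."""
    return sum(ps * math.log(ps / pm)
               for ps, pm in zip(p_sample, p_model) if ps > 0)

def kl_quadratic(counts, n_total, p_model):
    """Eq. (6): (1/2) sum_S (n_S - n p_S)^2 / (n p_S), the leading
    quadratic (chi-squared-like) approximation to n * D."""
    return 0.5 * sum((c - n_total * p) ** 2 / (n_total * p)
                     for c, p in zip(counts, p_model))

# Hypothetical language: n^L = 50 links spread over 5 Swadesh entries,
# compared against a uniform model distribution.
counts = [12, 11, 9, 10, 8]
n_total = sum(counts)
p_model = [0.2] * 5
p_sample = [c / n_total for c in counts]

exact = n_total * kl_divergence(p_sample, p_model)   # n^L * D, exponent of Eq. (4)
approx = kl_quadratic(counts, n_total, p_model)      # quadratic form of Eq. (6)
```

For small fluctuations the two agree closely, which is what licenses the Poisson-variance reading of Eq. (6).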
As a null model for the joint configuration over all languages in our set, if $N$ links are drawn independently from the distribution $p^{\rm model}_{SL}$, the multinomial probability of a particular set $n^L_S$ is given by

$$p\left(n^L_S \,\middle|\, N\right) \sim e^{-N D\left(p^{\rm sample}_{SL} \,\|\, p^{\rm model}_{SL}\right)} \qquad (7)$$

where (see footnote 7)

$$D\left(p^{\rm sample}_{SL} \,\middle\|\, p^{\rm model}_{SL}\right) \equiv \sum_{S,L} p^{\rm sample}_{SL} \log\left(\frac{p^{\rm sample}_{SL}}{p^{\rm model}_{SL}}\right) = D\left(p^{\rm sample}_L \,\middle\|\, p^{\rm model}_L\right) + \sum_L p^{\rm sample}_L D\left(p^{\rm sample}_{S|L} \,\middle\|\, p^{\rm model}_{S|L}\right). \qquad (8)$$

Footnote 7: As long as we calibrate $p^{\rm model}_L$ to agree with the per-language link frequencies $n^L/N$ in the data, the data will always be counted as more typical than almost all random samples, and its probability will come entirely from the KL divergences in the individual languages.
Multinomial samples of assignments $\hat n^L_S$ to each of the $22 \times 81$ (Swadesh, Language) pairs, with $N$ links total drawn from the distribution $p^{\rm null}_{SL}$, will produce KL divergences uniformly distributed in the coordinate $d\Phi \equiv e^{-D_{\rm KL}}\, dD_{\rm KL}$, corresponding to the uniform increment of cumulative probability in the model distribution. We may therefore use the cumulative probability to the right of $D\left(p^{\rm data}_{SL} \,\middle\|\, p^{\rm model}_{SL}\right)$ (one-sided p-value), in the distribution of samples $\hat n^L_S$, as a test of consistency of our data with the model of random sampling.
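The one-sided p-value just described can be sketched as a small Monte Carlo; the counts and the uniform toy model below are invented for illustration:

```python
import math, random

def kl_counts(counts, probs):
    """D(sample frequencies || probs) for a vector of link counts."""
    n = sum(counts)
    return sum((c / n) * math.log(c / (n * p))
               for c, p in zip(counts, probs) if c > 0)

def draw_multinomial(n, probs, rng):
    """One multinomial sample of n links over len(probs) cells."""
    counts = [0] * len(probs)
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1          # guard against rounding at r ~ 1.0
    return counts

def one_sided_p(data_counts, probs, n_samples=2000, seed=0):
    """Cumulative probability to the right of D(data || model) under
    multinomial resampling from the model."""
    rng = random.Random(seed)
    n = sum(data_counts)
    d_data = kl_counts(data_counts, probs)
    worse = sum(kl_counts(draw_multinomial(n, probs, rng), probs) >= d_data
                for _ in range(n_samples))
    return worse / n_samples
```

Counts close to the model expectation yield a large p-value (typical data), while strongly skewed counts yield a p-value near zero.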
In the next two subsections we will generate and test candidates for $p^{\rm model}$ that are different functions of the mean link numbers on Swadesh concepts and the total link numbers in languages.
2. Product model with intrinsic property of concepts
In general we wish to consider the consistency of joint configurations with random sampling, as a function of an aggregation scale. To do this, we will rank-order languages by increasing $n^L$, form non-overlapping bins of 1, 3, or 9 languages, and test the resulting binned degree distributions against different mean-degree and sampling models. We denote by $\langle n^L \rangle$ the average total link number in a bin, and by $\langle n^L_S \rangle$ the average link number per Swadesh entry in the bin. The simplest model, which assumes no interaction between concept and language properties, makes the model probability $p^{\rm model}_{SL}$ a product of its marginals. It is estimated from data without regard to binning, as

$$p^{\rm product}_{SL} \equiv \frac{n_S}{N} \times \frac{n^L}{N}. \qquad (9)$$
The 22 × 81 independent mean values are thereby specified in terms of 22 + 81 sample estimators.
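Eq. (9) is just an outer product of the two marginal frequencies. As a minimal sketch, with an invented toy count matrix in place of the 22 × 81 data:

```python
def product_model(n_SL):
    """Eq. (9): p_SL = (n_S / N) * (n^L / N), built from the matrix of
    link counts n_SL[s][l] (rows = Swadesh entries, columns = languages)."""
    S, L = len(n_SL), len(n_SL[0])
    n_S = [sum(row) for row in n_SL]                              # word marginals
    n_L = [sum(n_SL[s][l] for s in range(S)) for l in range(L)]   # language marginals
    N = sum(n_S)
    return [[(n_S[s] / N) * (n_L[l] / N) for l in range(L)] for s in range(S)]

# Toy 3 x 2 count matrix; in the paper's setting this would be 22 x 81.
toy = [[2, 1], [1, 2], [3, 3]]
p = product_model(toy)
```

Because the marginals each sum to N, the resulting probability matrix sums to one by construction.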
The KL divergence of the joint configuration of links in the actual data from this model, under whichever binning is used, becomes

$$D\left(p^{\rm data}_{SL} \,\middle\|\, p^{\rm model}_{SL}\right) = D\left(\frac{n^L_S}{N} \,\middle\|\, \frac{n_S}{N} \frac{n^L}{N}\right). \qquad (10)$$
As we show in Fig. 11 below, even for 9-language bins which we expect to average over a large
amount of language-specific fluctuation, the product model is ruled out at the 1% level.
We now show that a richer model, describing interaction between word and language properties, accepts not only the 9-language aggregate, but also the 3-language aggregate with a small adjustment of the language size to which words respond (to produce consistency with word-size and language-size marginals). Only fluctuation statistics at the level of the joint configuration of
FIG. 9. Plots of the data $n^L_S$, $N p^{\rm product}_{SL}$, and $\hat n^{\rm sample}_{SL}$ (axes: Swadeshes $\times$ Languages), in accordance with Fig. 4S(f). The colors denote the corresponding numbers on the scale. The original data in the first panel agree reasonably well with the sample in the last panel.
81 individual languages remains strongly excluded by the null model of random sampling.
3. Product model with saturation
An inspection of the deviations of our data from the product model shows that the initial propensity of a word to participate in polysemies, as inferred in languages where that word has few links, in general overestimates the number of links (degree). Put differently, languages seem to place limits on the weight of single polysemies, favoring distribution over distinct polysemies, but the number of potential distinct polysemies is a parameter independent of the likelihood that the available polysemies will be formed. Interpreted in terms of our supposed semantic space, the proximity of target words to a Swadesh entry may determine the likelihood that they will be polysemous with it, but the total number of proximal targets may vary independently of their absolute proximity. These limits on the number of neighbors of each concept are captured by an additional 22 parameters.

To accommodate this characteristic, we revise the model of Eq. (9) to the saturating function

$$\frac{A_S \langle n^L \rangle}{B_S + \langle n^L \rangle},$$
FIG. 10. Plots of the saturating function (11) with the parameters given in Table IV, compared to $n^L_S$ (ordinate) in 9-language bins (to increase sample size), versus bin averages $\langle n^L \rangle$ (abscissa). The red line is drawn through data values, blue is the product model, and green is the saturation model. WATER requires no significant deviation from the product model ($B_{\rm WATER}/N \approx 20$), while MOON shows the lowest saturation value among the Swadesh entries, at $B_{\rm MOON} \approx 3.4$.
where the degree number $n^L_S$ for each Swadesh entry $S$ is proportional to $A_S$ and to language size, but is bounded by $B_S$, the number of proximal concepts. The corresponding model probability for each language then becomes

$$p^{\rm sat}_{SL} = \frac{(A_S/B_S)(n^L/N)}{1 + n^L/B_S} \equiv \frac{\tilde p_S\, p^{\rm data}_L}{1 + p^{\rm data}_L N/B_S}. \qquad (11)$$
As all $B_S/N \to \infty$ we recover the product model, with $p^{\rm data}_L \equiv n^L/N$ and $\tilde p_S \to n_S/N$.
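A minimal numerical sketch of Eq. (11), with invented values for the propensity and saturation parameters; it checks the stated limit in which the product model is recovered:

```python
def p_sat(p_tilde, B_S, n_L, N):
    """Eq. (11): p_sat = p~_S p_L / (1 + p_L N / B_S), with p_L = n^L / N
    and p~_S = A_S / B_S."""
    p_L = n_L / N
    return p_tilde * p_L / (1.0 + p_L * N / B_S)

# A hypothetical entry with propensity p~_S = 0.05 in a language with
# n^L = 40 links out of N = 2000 total links:
saturated = p_sat(0.05, 5.0, 40, 2000)      # strong saturation, small B_S
unsaturated = p_sat(0.05, 1e9, 40, 2000)    # B_S -> infinity: product model
product = 0.05 * 40 / 2000                  # p~_S * n^L / N
```

Small $B_S$ suppresses the probability below the product-model value, which is the overestimation effect described above.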
L
A first-level approximation to fit parameters AS and BS is given by minimizing the weighted
mean-square error
E≡
X
L
!2
AS n L
1 X L
nS −
.
hnL i
BS + hnL i
(12)
S
The function (12) assigns equal penalty to squared error within each language bin $\sim \langle n^L \rangle$, proportional to the variance expected from Poisson sampling. The fit values obtained for $A_S$ and $B_S$ do not depend sensitively on the size of bins, except for the Swadesh entry MOON in the case where all 81 single-language bins are used. MOON has so few polysemies, but the MOON/month polysemy is so likely to be found, that the language Itelmen, with only one link, has this polysemy. This point leads to instabilities in fitting $B_{\rm MOON}$ in single-language bins. For bins of size 3–9 the instability is removed. Representative fit parameters across this range are shown in Table IV. Examples of the saturation model for two words, plotted against the 9-language binned degree data in Fig. 10,
Meaning category    Saturation B_S    Propensity p̃_S
STAR                      1234.2             0.025
SUN                         25.0             0.126
YEAR                      1234.2             0.021
SKY                       1234.2             0.080
SEA/OCEAN                 1234.2             0.026
STONE/ROCK                1234.2             0.041
MOUNTAIN                  1085.9             0.049
DAY/DAYTIME                195.7             0.087
SAND                      1234.2             0.026
ASH(ES)                     13.8             0.068
SALT                      1234.2             0.007
FIRE                      1234.2             0.065
SMOKE                     1234.2             0.031
NIGHT                       89.3             0.034
DUST                       246.8             0.065
RIVER                      336.8             0.048
WATER                     1234.2             0.073
LAKE                      1234.2             0.047
MOON                         1.2             0.997
EARTH/SOIL                1234.2             0.116
CLOUD(S)                    53.4             0.033
WIND                      1234.2             0.051

TABLE IV. Fitted values of the parameters $B_S$ and $\tilde p_S$ for the saturation model of Eq. (11). The saturation value $B_S$ is an asymptotic number of meanings associated with the entry $S$, and the propensity $\tilde p_S$ is the rate at which the number of polysemies increases with $n^L$ at low $n^L_S$.
show the range of behaviors spanned by Swadesh entries.
The least-squares fits to $A_S$ and $B_S$ do not directly yield a probability model consistent with the marginals for language size, which in our data are fixed parameters rather than sample variables to be explained. They closely approximate the marginal $N \sum_L p^{\rm sat}_{SL} \approx n_S$ (deviations $< 1$ link for every $S$) but lead to mild violations $N \sum_S p^{\rm sat}_{SL} \neq n^L$. We corrected for this by altering the saturation model to suppose that, rather than word properties' interacting with the exact value $\langle n^L \rangle$, they interact with a (word-independent but language-dependent) multiple $(1 + \varphi_L)\langle n^L \rangle$, so that the model for $n^L_S$ in each language becomes

$$\frac{A_S (1 + \varphi_L) \langle n^L \rangle}{B_S + (1 + \varphi_L) \langle n^L \rangle},$$

in terms of the least-squares coefficients $A_S$ and $B_S$ of Table IV. The values of $\varphi_L$ are solved with
FIG. 11. Kullback-Leibler divergence of link frequencies in our data, grouped into non-overlapping 9-language bins ordered by rank, from the product distribution (9) and the saturation model (11). Parameters $A_S$ and $B_S$ have been adjusted (as explained in the text) to match the word and language marginals. From 10,000 random samples $\hat n^L_S$: (green) histogram for the product model; (blue) histogram for the saturation model; (red dots) data. The product model rejects the 9-language joint binned configuration at the 1% level (dark shading), while the saturation model is typical of the same configuration at $\sim 59\%$ (light shading).
Newton's method to produce $N \sum_S p^{\rm sat}_{SL} \to n^L$, and we checked that they preserve $N \sum_L p^{\rm sat}_{SL} \approx n_S$
within small fractions of a link. The resulting adjustment parameters are plotted versus nL for
individual languages in Fig. 12. Although they were computed individually for each L, they form
a smooth function of nL , possibly suggesting a refinement of the product model, but also perhaps
reflecting systematic interaction of small-language degree distributions with the error function (12).
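The marginal-matching step can be sketched as a one-dimensional Newton iteration per language for $\varphi_L$; the coefficients below are invented toy values, not the fitted values of Table IV:

```python
def predicted_total(phi, n_L, A, B):
    """Model total: sum_S A_S x / (B_S + x) with x = (1 + phi) * n^L."""
    x = (1.0 + phi) * n_L
    return sum(a * x / (b + x) for a, b in zip(A, B))

def solve_phi(n_L, A, B, tol=1e-10, max_iter=100):
    """Newton iteration for the correction factor phi_L that makes the
    model's total link number match the language marginal n^L; the
    derivative is taken numerically for brevity."""
    phi = 0.0
    for _ in range(max_iter):
        f = predicted_total(phi, n_L, A, B) - n_L
        if abs(f) < tol:
            break
        h = 1e-6
        df = (predicted_total(phi + h, n_L, A, B)
              - predicted_total(phi - h, n_L, A, B)) / (2 * h)
        phi -= f / df
    return phi

# Hypothetical coefficients for a 3-entry toy problem:
A = [30.0, 20.0, 25.0]
B = [50.0, 10.0, 100.0]
phi_L = solve_phi(40.0, A, B)
```

Because the predicted total is increasing in $\varphi$, the iteration converges quickly whenever the target marginal is attainable (i.e., below $\sum_S A_S$).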
FIG. 12. Plot of the correction factor $\varphi_L$ versus $n^L$ for individual languages in the probability model used in the text, with parameters $B_S$ and $\tilde p_S$ shown in Table IV. Although the $\varphi_L$ values were individually solved with Newton's method to ensure that the probability model matched the whole-language link values, the resulting correction factors are a smooth function of $n^L$.
FIG. 13. With the same model parameters as in Fig. 11, the saturation model is now marginally plausible for the joint configuration of 27 three-language bins in the data, at the 7% level (light shading). For reference, this fine-grained joint configuration rejects the null model of independent sampling from the product model at a p-value $\approx 10^{-3}$ (dark shading in the extreme tail). 4000 samples were used to generate this test distribution. The blue histogram is for the saturation model, the green histogram is for the product model, and the red dots are the data.
With the resulting joint distribution $p^{\rm sat}_{SL}$, tests of the joint degree counts in our dataset for consistency with multinomial sampling in nine 9-language bins are shown in Fig. 11, and results of tests using 27 three-language bins are shown in Fig. 13. Binning nine languages clearly averages over enough language-specific variation to make the data strongly typical of a random sample ($P \sim 59\%$), while the product model (which also preserves marginals) is excluded at the 1% level. The marginal acceptance of the data even for the joint configuration of three-language bins ($P \sim 7\%$) suggests that language size $n^L$ is an excellent explanatory variable and that residual language variations cancel to good approximation even in small aggregations.
C. Single instances versus the aggregate representation
The preceding subsection showed that intermediate scales of aggregation of our language data are sufficiently random that they can be used to refine probability models for mean degree as a function of parameters in the globally aggregated graph. The saturation model, with data-consistent marginals and multinomial sampling, is weakly plausible for bins of as few as three languages. Down to this scale, we have therefore not been able to show a requirement for deviations from the independent sampling of links entailed by the use of the aggregate graph as a summary statistic. However, we were unable to find a further refinement of the mean distribution that would reproduce the properties of single-language samples. In this section we characterize the nature of their deviation from independent samples of the saturation model, show that it may be reproduced by models of non-independent (clumpy) link sampling, and suggest that these deviations reflect excess synonymous polysemy.
1. Power tests and uneven distribution of single-language p-values
To evaluate the contribution of individual languages versus language aggregates to the acceptance or rejection of random-sampling models, we computed p-values for individual languages or
language bins, using the KL-divergence (5). A plot of the single-language p-values for both the
null (product) model and the saturation model is shown in Fig. 14. Histograms for both single
languages (from the values in Fig. 14) and aggregate samples formed by binning consecutive groups
of three languages are shown in Fig. 15.
For samples from a random model, p-values would be uniformly distributed in the unit interval,
and histogram counts would have a multinomial distribution with single-bin fluctuations depending
on the total sample size and bin width. Therefore, Fig. 15 provides a power test of our summary
statistics. The variance of the multinomial may be estimated from the large-p-value body where
the distribution is roughly uniform, and the excess of counts in the small-p-value tail, more than
one standard deviation above the mean, provides an estimate of the number of languages that can
be confidently said to violate the random-sampling model.
From the upper panel of Fig. 15, with a total sample of 81 languages, we can estimate a number
of ∼ 0.05 × 81 ≈ 4–5 excess languages at the lowest p-values of 0.05 and 0.1, with an additional
2–3 languages rejected by the product model in the range p-value ∼ 0.2. Comparable plots in
Fig. 15 (lower panel) for the 27 three-language aggregate distributions are marginally consistent
with random sampling for the saturation model, as expected from Fig. 13 above. We will show in
the next section that a more systematic trend in language fluctuations with size provides evidence
that the cause for these rejections is excess variance due to repeated attachment of links to a subset
of nodes.
FIG. 14. $\log_{10}$(p-value) by KL divergence, relative to 4000 random samples per language, plotted versus language rank in order of increasing $n^L$. The product model (green) shows equal or lower p-values for almost all languages than the saturation model (blue). Three languages (Basque, Haida, and Yorùbá) had value $p = 0$ consistently across samples in both models, and are removed from subsequent regression estimates. A trend toward decreasing $p$ is seen with increasing $n^L$.
2. Excess fluctuations in degree of polysemy
If we define the size-weighted relative variance of a language, analogously to the error term in Eq. (12), as

$$\left(\sigma^2\right)^L \equiv \frac{1}{n^L} \sum_S \left( n^L_S - n^L p^{\rm model}_{S|L} \right)^2, \qquad (13)$$
then Fig. 16 shows that $-\log_{10}$(p-value) has a high rank correlation with $(\sigma^2)^L$ and a roughly linear regression over most of the range (see footnote 8). Two languages (Itelmen and Hindi), which appear as large outliers relative to the product model, are within the main dispersion in the saturation model, showing that their discrepancy is corrected in the mean link number. We may therefore understand a large fraction of the improbability of languages as resulting from excess fluctuations of their degree numbers relative to the expectation from Poisson sampling.
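Eq. (13) and its Poisson baseline can be checked numerically; under independent Poisson sampling the expected relative variance is exactly unity. The language size and model distribution below are toy values, not the data:

```python
import math, random

def relative_variance(counts, n_total, p_model):
    """Eq. (13): (1/n^L) * sum_S (n^L_S - n^L p^model_{S|L})^2."""
    return sum((c - n_total * p) ** 2
               for c, p in zip(counts, p_model)) / n_total

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

# Hypothetical language with n^L = 60 links over 6 entries; averaging the
# relative variance over many independent Poisson draws should give ~1.
rng = random.Random(3)
p_model = [1 / 6] * 6
n_total = 60
draws = [relative_variance([poisson(n_total * p, rng) for p in p_model],
                           n_total, p_model) for _ in range(2000)]
mean_rv = sum(draws) / len(draws)
```

Values of $(\sigma^2)^L$ systematically above unity therefore signal excess (super-Poisson) fluctuations of the kind discussed next.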
Fig. 17 then shows the relative variance from the saturation model, plotted versus total average link number for both individual languages and three-language bins. The binned languages show no significant regression of relative variance away from the value unity for Poisson sampling, whereas single languages show a systematic trend toward larger variance in larger languages, a pattern that

Footnote 8: Recall from Eq. (6) that the leading quadratic term in the KL divergence differs from $(\sigma^2)^L$ in that it presumes Poisson fluctuation with variance $n^L p^{\rm model}_{S|L}$ at the level of each word, rather than uniform variance $\propto n^L$ across all words in a language. The relative variance is thus a less specific error measure.
FIG. 15. (Upper panel) Normalized histogram of p-values from the 81 languages plotted in Fig. 14. The saturation model (blue) produces a fraction $\sim 0.05 \times 81 \approx$ 4–5 languages in the lowest p-value bins $\{0.05, 0.1\}$ above the roughly uniform background over the rest of the interval (shaded area with dashed boundary). A further excess of 2–3 languages with p-values in the range [0, 0.2] for the product model (green) reflects the part of the mismatch corrected through mean values in the saturation model. (Lower panel) Corresponding histogram of p-values for 27 three-language aggregate degree distributions. The saturation model (blue) is now marginally consistent with a uniform distribution, while the product model (green) still shows a slight excess of low-p bins. Coarse histogram bins have been used in both panels to compensate for the small sample numbers in the lower panel, while producing comparable histograms.
we will show is consistent with “clumpy” sampling of a subset of nodes. The disappearance of
this clumping in binned distributions shows that the clumps are uncorrelated among languages at
similar nL .
FIG. 16. (Upper panel) $-\log_{10}(P)$ plotted versus the relative variance $(\sigma^2)^L$ from Eq. (13) for the 78 languages with non-zero p-values from Fig. 14: (blue) saturation model; (green) product model. Two languages (circled) which appear as outliers with anomalously small relative variance in the product model, Itelmen and Hindi, disappear into the central tendency with the saturation model. (Lower panel) An equivalent plot for 26 three-language bins. Notably, the apparent separation of individual large-$n^L$ languages into two groups has vanished under binning, and a unimodal and smooth dependence of $-\log_{10}(P)$ on $(\sigma^2)^L$ is seen.
3. Correlated link assignments
We may retain the mean degree distributions, while introducing a systematic trend of relative variance with $n^L$, by modifying our sampling model away from strict Poisson sampling to introduce "clumps" of links. To remain within the use of minimal models, we modify the sampling procedure by a single parameter which is independent of the word $S$, the language size $n^L$, or the particular language $L$. We introduce the sampling model as a function of two parameters, and show that one function of these is constrained by the regression of excess variance. (The other may take any interior value,
3
(σ2)L = 0.011938 nL + 0.87167
(σ2)L = 0.0023919 nL + 0.94178
2.5
(σ2)L
2
1.5
1
0.5
0
0
10
20
30
40
50
60
70
nL
FIG. 17. Relative variance from the saturation model versus total link number nL for 78 languages excluding
Basque, Haida, and Yorùbá. Least-squares regression are shown for three-language bins (green) and individual languages (blue), with regression coefficients inset. Three-language bins are consistent with Poisson
sampling at all nL , whereas single languages show systematic increase of relative variance with increasing
nL .
so we have an equivalence class of models.) In each language, select a number $B$ of Swadesh entries randomly. Let the Swadesh indices be denoted $\{S_\beta\}_{\beta \in 1, \ldots, B}$. We will take some fraction of the total links in that language, and assign them only to the Swadesh entries whose indices are in this privileged set. Introduce a parameter $q$ that will determine that fraction.
We require correlated link assignments to be consistent with the mean determined by our model fit, since binning of data has shown no systematic effect on mean parameters. Therefore, for the random choice $\{S_\beta\}_{\beta \in 1, \ldots, B}$, introduce the normalized density on the privileged links

$$\pi_{S|L} \equiv \frac{p^{\rm model}_{S|L}}{\sum_{\beta=1}^{B} p^{\rm model}_{S_\beta|L}} \qquad (14)$$

if $S \in \{S_\beta\}_{\beta \in 1, \ldots, B}$, and $\pi_{S|L} = 0$ otherwise. Denote the aggregated weight of the links in the privileged set by

$$W \equiv \sum_{\beta=1}^{B} p_{S_\beta|L}. \qquad (15)$$
Then introduce a modified probability distribution based on the randomly selected links, in the form

$$\tilde p_{S|L} \equiv (1 - qW)\, p_{S|L} + qW\, \pi_{S|L}. \qquad (16)$$
Multinomial sampling of nL links from the distribution p̃S|L will produce a size-dependent variance
of the kind we see in the data. The expected degrees given any particular set {Sβ } will not agree
with the means in the aggregate graph, but the ensemble mean over random samples of languages
will equal pS|L , and binned groups of languages will converge toward it according to the central-limit
theorem.
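Eqs. (14)–(16) can be sketched directly. The check below verifies that a single draw of $\tilde p_{S|L}$ remains normalized and that the ensemble mean over random clumps returns $p_{S|L}$, the property invoked above; the model distribution and parameter values are invented for illustration:

```python
import random

def clumpy_distribution(p_model, B, q, rng):
    """Eqs. (14)-(16): pick B privileged entries, then mix
    p~ = (1 - qW) p + qW pi, where pi renormalizes p on the clump.
    Note qW * pi_S simplifies to q * p_S for S in the clump."""
    chosen = set(rng.sample(range(len(p_model)), B))
    W = sum(p_model[i] for i in chosen)
    return [(1.0 - q * W) * p + (q * p if i in chosen else 0.0)
            for i, p in enumerate(p_model)]

# Ensemble mean over many random clumps should return p_model:
rng = random.Random(5)
p_model = [0.05, 0.15, 0.10, 0.20, 0.08, 0.12, 0.06, 0.14, 0.04, 0.06]
acc = [0.0] * len(p_model)
n_draws = 20000
for _ in range(n_draws):
    for i, v in enumerate(clumpy_distribution(p_model, B=3, q=0.9, rng=rng)):
        acc[i] += v
mean_p = [a / n_draws for a in acc]
```

Each realized $\tilde p_{S|L}$ is a proper distribution, but its clump-dependent deviation from $p_{S|L}$ is what injects the excess variance analyzed next.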
The proof that the relative variance increases linearly in $n^L$ comes from the expansion of the expectation of Eq. (13) for random samples, denoted

$$\left\langle \left(\hat\sigma^2\right)^L \right\rangle \equiv \left\langle \frac{1}{n^L} \sum_S \left( \hat n^L_S - n^L p^{\rm model}_{S|L} \right)^2 \right\rangle = \left\langle \frac{1}{n^L} \sum_S \left[ \hat n^L_S - n^L \tilde p_{S|L} + n^L \left( \tilde p_{S|L} - p^{\rm model}_{S|L} \right) \right]^2 \right\rangle = \left\langle \frac{1}{n^L} \sum_S \left( \hat n^L_S - n^L \tilde p_{S|L} \right)^2 \right\rangle + n^L \left\langle \sum_S \left( \tilde p_{S|L} - p^{\rm model}_{S|L} \right)^2 \right\rangle. \qquad (17)$$
The first expectation over $\hat n^L_S$ is constant (of order unity) for Poisson samples, and the second expectation (over the sets $\{S_\beta\}$ that generate $\tilde p_{S|L}$) does not depend on $n^L$ except in the prefactor. Cross-terms vanish because link samples are not correlated with samples of $\{S_\beta\}$. Both terms in the third line of Eq. (17) scale under binning as (bin-size)$^0$. The first term is invariant due to Poisson sampling, while in the second term, the central-limit-theorem reduction of the variance in samples over $\tilde p_{S|L}$ cancels growth in the prefactor $n^L$ due to aggregation.
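The scaling can be illustrated by simulation: multinomial samples drawn through independently re-chosen clumps show a relative variance that grows with $n^L$, while the underlying mean distribution is preserved. All parameters and the model distribution below are invented:

```python
import random

def clumpy(p, B, q, rng):
    """Eq. (16) with pi from Eq. (14); qW*pi_S reduces to q*p_S on the clump."""
    chosen = set(rng.sample(range(len(p)), B))
    W = sum(p[i] for i in chosen)
    return [(1.0 - q * W) * pi + (q * pi if i in chosen else 0.0)
            for i, pi in enumerate(p)]

def sample_counts(n, probs, rng):
    """One multinomial sample of n links."""
    counts = [0] * len(probs)
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1
    return counts

def mean_rel_var(n_L, p, B, q, draws, rng):
    """Average of Eq. (13) over simulated languages, each with a fresh clump."""
    total = 0.0
    for _ in range(draws):
        c = sample_counts(n_L, clumpy(p, B, q, rng), rng)
        total += sum((ci - n_L * pi) ** 2 for ci, pi in zip(c, p)) / n_L
    return total / draws

# Toy aggregate distribution over 10 "Swadesh" cells; hypothetical B, q.
p = [0.05, 0.15, 0.10, 0.20, 0.08, 0.12, 0.06, 0.14, 0.04, 0.06]
rng = random.Random(7)
rv_small = mean_rel_var(20, p, B=3, q=0.9, draws=1500, rng=rng)
rv_large = mean_rel_var(80, p, B=3, q=0.9, draws=1500, rng=rng)
```

The gap between `rv_large` and `rv_small` reflects the second, linearly growing term of Eq. (17), which vanishes only when the clumps are averaged away by binning.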
Because the linear term in Eq. (17) does not systematically change under binning, we interpret the vanishing of the regression for three-language bins in Fig. 17 as a consequence of fitting the mean value to binned data as sample estimators (see footnote 9). We must therefore choose parameters $B$ and $q$ so that the regression coefficients in the data are typical in the model of clumpy sampling, while regressions including zero retain non-vanishing weight in models of three-bin aggregations.
Fig. 18 compares the range of regression coefficients obtained for random samples of languages

Footnote 9: We have verified this by generating random samples from the model (17), fitting a saturation model to binned sample configurations using the same algorithms as we applied to our data, and then performing regressions equivalent to those in Fig. 17. In about 1/3 of cases the fitted model showed regression coefficients consistent with zero for three-language bins. The typical behavior when such models were fit to random sample data was that the three-bin regression coefficient decreased from the single-language regression by $\sim 1/3$.
48
with the values nL in our data, from either the original saturation model psat
S|L , or the clumpy
model p̃S|L randomly re-sampled for each language in the joint configuration. Parameters used
were (B = 7, q = 0.975).10 With these parameters, ∼ 1/3 of links were assigned in excess to ∼ 1/3
of words, with the remaining 2/3 of links assigned according to the mean distribution.
[Figure 18: histogram plot; horizontal axis, regression coefficient from −0.02 to 0.05; vertical axis, counts in units of 10^5.]
FIG. 18. Histograms of regression coefficients for language link samples $\hat n^L_S$, either generated by Poisson sampling from the saturation model $p^{\mathrm{model}}_{S|L}$ fitted to the data (blue), or drawn from the clumped probabilities $\tilde p_{S|L}$ defined in Eq. (16), with the set of privileged words $\{S_\beta\}$ independently drawn for each language (green). Solid lines refer to joint configurations of 78 individual languages with the $n^L$ values in Fig. 17. Dashed lines show 26 non-overlapping three-language bins.
The important features of the graph are: 1) binning does not change the mean regression coefficient, verifying that Eq. (17) scales homogeneously as (bin-size)$^0$, although the variance of the binned data increases owing to the reduced number of sample points; 2) the observed regression slope of 0.012 in the data lies far outside the support of multinomial sampling from $p^{\mathrm{sat}}_{S|L}$, whereas with these parameters it becomes typical under $\tilde p_{S|L}$, while still leaving significant probability for the three-language binned regression to lie near zero (even without ex-post fitting).
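The qualitative contrast underlying Fig. 18 can be reproduced in miniature. The sketch below uses hypothetical stand-in values (a uniform mean distribution over 20 senses, B = 5, q = 0.5, and random $n^L$ values in place of those of the actual 78 languages): regressing per-language variance on sample size gives a slope near zero under plain multinomial sampling, but a clearly positive slope under clumpy sampling.

```python
import random

def sigma2(p_true, p_sample, n_links, rng):
    # per-language relative variance: (1/n_L) sum_S (n_hat_S - n_L p_S)^2
    counts = [0] * len(p_true)
    for s in rng.choices(range(len(p_true)), weights=p_sample, k=n_links):
        counts[s] += 1
    return sum((c - n_links * ps) ** 2
               for c, ps in zip(counts, p_true)) / n_links

def slope(xs, ys):
    # ordinary least-squares regression coefficient of y on x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

rng = random.Random(2)
p = [1.0 / 20] * 20                                  # toy mean distribution
n_langs = [rng.randint(50, 500) for _ in range(78)]  # stand-in n_L values

# multinomial sampling straight from p: slope hovers near zero
flat = slope(n_langs, [sigma2(p, p, n, rng) for n in n_langs])

# clumpy sampling: each language gets its own privileged words (B=5, q=0.5)
def clumped(rng):
    priv = set(rng.sample(range(20), 5))
    return [0.5 / 20 + (0.5 / 5 if s in priv else 0.0) for s in range(20)]

clumpy = slope(n_langs, [sigma2(p, clumped(rng), n, rng) for n in n_langs])
assert clumpy > flat   # only clumpy sampling yields a positive slope
```

With these toy parameters the clumpy slope concentrates near $\sum_S (\tilde p_{S|L} - p_{S|L})^2$, the coefficient of the linear term in Eq. (17), while the multinomial slope stays statistically consistent with zero.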
10 Solutions consistent with the regression in the data may be found for B ranging from 3 to 17. B = 7 was chosen as an intermediate value, consistent with the typical numbers of nodes appearing in our samples by inspection.
