It’s common enough for people to think about languages in terms of relative complexity. I often hear people claim that a language—not infrequently their own language, or a language which they are learning—is particularly complex and difficult to learn due to its large vocabulary, morphological irregularities, or tricky pronunciation. It does seem intuitively obvious that some languages must just be more complex than others. Yet one of the first propositions that many undergrads are exposed to when they begin to study linguistics is that this is actually a myth.
A key tenet of formal linguistics and sociolinguistics for much of the 20th century was that of equicomplexity. This is the idea that all languages are equally effective and powerful means of communication, and, by somewhat shaky extension, that all languages are equally complex. Equicomplexity arose not really from any data-driven research, but from ideological discussions around prescriptivism and descriptivism. You’ll remember from an earlier post on this blog (http://www.icge.co.uk/languagesciencesblog/?p=25) that prescriptivism describes the position of believing that there is a ‘correct’ way to speak, and that to speak in other ways is somehow deficient, while descriptivism is an attitude of open interest towards the ways in which language is used, without attaching any value judgements to them. Linguistics—particularly sociolinguistics—holds descriptivism as a core component of its approach, yet throughout much of history prescriptivism has been the mainstream viewpoint.
The—in many ways still largely unsuccessful—battle against prescriptivism has perhaps necessitated holding simple, powerful ideological positions. Faced with educators who believe that the varieties spoken by their non-white or working class pupils are intrinsically inferior to the standard (calling them ‘illogical’, ‘crude’, ‘rough’, ‘ugly’ or just ‘incorrect’), there seems to be little space to have a sophisticated conversation about the nature of complexity and expressive power. Such views are clearly proxies for racism and classism and serve to perpetuate the grievous structural inequalities that typify western societies. They are best battled with clear maxims, cleanly expressed: All languages are equally powerful tools of communication. All languages are equally deserving of respect. There is no such thing as a simple language.
So, it’s obvious that equicomplexity took its place in the canon of linguistic assumptions for good reason. However, in recent years and not without controversy, scholars have begun to unpick it. Few linguists would argue with the fundamental ideological position underlying the statement that ‘all [natively learned] languages are equally powerful means of communication’, but many have begun to question the leap to the idea that all languages must therefore be equally complex.
It’s clear that in any particular area of grammar, languages can be more or less complex. So English, with two distinct surface forms of each regular noun, is obviously simpler in this respect than Finnish, with perhaps 26. Mandarin, which distinguishes between 19 and 26 different consonants (depending on how you count), is clearly more complicated in this respect than New Zealand Māori, with 10 consonants, but less complicated than Adyghe, with over 50. Given this, to maintain that all languages are equally complex overall, one must assume that when one area of grammar gets more complicated, others get simpler to compensate. This has been the implicit assumption underlying equicomplexity for several decades.
The problem is, it turns out that this just isn’t true. If it were, then whatever our measure of complexity (and that’s a whole other blog post), we should find in a big sample of languages a negative correlation between complexity in one area of grammar and complexity in another. Yet in reality, studies like Maddieson (2006; 2007) and Shosted (2006) show, if anything, a weak positive correlation between complexity in different areas of grammar: languages with more complicated phonology are more, not less, likely to have complicated morphology.
So where does that leave equicomplexity? Well, if we accept these findings then we pretty much have to abandon the idea that all languages are equally complex. It was never backed up by evidence in the first place, and these findings seem to represent some pretty conclusive counter-evidence. It doesn’t, of course, mean that we should abandon the claims that all natively-learned languages are equally powerful means of communication and that all languages are equally deserving of respect. These remain important ideological positions. However, if we can reject canonical equicomplexity, lots of exciting new avenues of research open up to us: Why are some languages more complex than others? How much of language complexity is built into the innate language faculty, and how much is cultural elaboration? What social conditions cause languages to become simpler and what cause them to become more complex? It’s in this latter area that my own research is focused.
A pertinent addendum to all of this has to do with the nature and experience of complexity. When, as I mentioned at the beginning, I hear people talking about how complicated different languages are, they’re almost always interested in the point of view of adult learners. They’re interested in whether they will have to put in more or less effort to learn another language, and in how much effort non-native speakers of their own language have had to make.
The reality is that this ‘ease of learning’ is only partially related to ‘complexity’ in the abstract. The biggest factor which will make another language easy or difficult to learn is not complexity but how closely related it is to your own native language(s) and any other languages you speak. Native speakers of English will find Norwegian or French extremely easy to learn, as (for different reasons) they each share a great deal of vocabulary and structural similarities with English; native speakers of Cantonese may not. Native speakers of languages which do not distinguish tones (e.g. most—though not all—European languages) may find particular difficulty in learning languages which do (most languages of sub-Saharan Africa, the Chinese languages and their relatives, as well as many others).
Having taken this into account, then, yes, morphological and phonological complexity will tend to make for a harder learning process. There is simply far more verbal morphology to memorise for a student of Spanish than for a student of Mandarin, and this will take time. Similarly, a learner of Hawaiian will need to spend very little energy on learning to pronounce new consonants compared with a learner of Halkomelem or another Salishan language, and a student of Danish must learn to distinguish far more vowel qualities than a student of Standard Arabic.
In the end, we have a rather mixed picture. Clearly, in descriptive, neutral terms, some languages are much more complex than others. From a practical point of view for most users of language, though, this has little real relevance. Their experience of language complexity will mostly come down to their own language backgrounds—and even where it doesn’t, it will always be possible to identify particularly complex structures and features of some sort in any language.
Maddieson, Ian. 2006. Correlating phonological complexity: Data and validation. Linguistic Typology 10. 106–123. doi:10.1515/LINGTY.2006.004.
Maddieson, Ian. 2007. Issues of phonological complexity: Statistical analysis of the relationship between syllable structures, segment inventories and tone contrasts. In M.-J. Solé, P. Beddor & M. Ohala (eds.), Experimental Approaches to Phonology, 93–103. Oxford: Oxford University Press.
Shosted, Ryan K. 2006. Correlating complexity: A typological approach. Linguistic Typology 10. 1–40. doi:10.1515/LINGTY.2006.001.
It all began with a question while I was in a cab from the Cambridge railway station to my college. The driver, after asking where I come from and what my field of study is, asked me a quite simple yet difficult question that kept me busy for the rest of my trip: “so, how many languages are there in China?”
Most people I have met, even Chinese people themselves, do not have a clear idea about the linguistic situation and diversity in China. After all, there is a language named after the country, the so-called “Chinese language”, which is also the lingua franca in China. This description, however, is far from accurate with regard to the real situation of languages spoken in China – China is not a monolingual country, although some areas within it are monolingual. The definition of the Chinese language is more complicated than you might imagine, even though everyone knows that the national language of China is called “Standard Chinese”.
In this post, I focus on several myths about the languages in China, and show that neither “Chinese language” nor “languages in China” are simple concepts.
How many languages are there in China?
There are 298 languages in total currently spoken by native peoples in China; some are national or regional lingua francas with many millions of speakers, while others are used by only a few thousand people in small counties (Lewis, Simons and Fennig, 2014). This number does not include languages spoken by immigrants, such as English, Arabic or Yoruba; it does, however, include some languages spoken by ethnic minorities in China which are official languages of other countries, such as Russian, Uzbek and Korean. (There are ethnic minorities of Russian, Uzbek and Korean origin in China whose native languages are recognised among the languages of China.)
Do all the languages in China use Chinese characters?
This is definitely not the case; or, to be more precise, Chinese is nowadays the only language in China that uses Chinese characters. Most of the commonly used languages in China have their own written forms, like Tibetan, Mongolian and Uyghur (the last written in an Arabic-based script); some languages like Zhuang once used Chinese characters for documentation, but these have gradually been replaced by Latin-based scripts.
Is there an official language of China?
China does not have a designated “official language” – I have double-checked the Constitution and there is not a single article addressing the issue of an official language for the country. However, China does have a standard language: according to Article 2 of the Law of the People’s Republic of China on the Standard Spoken and Written Chinese Language (2000), the spoken form of standard Chinese is Putonghua and the written form uses standardised Chinese characters.
In actual use, however, language policy is more flexible; especially in areas where ethnic minorities reside, languages other than Standard Chinese are used in both informal and institutional contexts. A good example comes from the Renminbi, the currency of China: if we examine a banknote carefully, we will find that it is more similar to the Swiss franc than to the pound sterling – it is multilingual. A number of languages appear on the note: Chinese (in the form of pinyin), Mongolian, Tibetan, Uyghur and Zhuang. Apart from Chinese, the other four are important minority languages in China, and some of them have obtained institutional status in the provinces where they are mostly spoken; for instance, Tibetan is an official language in Tibet, part of Qinghai and some areas of Gansu.
So what is “Chinese language”?
The term “Chinese language”, or Hanyu (汉语), is a loosely defined concept. In linguistics, the name refers to a group of linguistic varieties that come from one single ancient origin; the vocabulary and sentential structure of these varieties are broadly the same. In general, these linguistic varieties can be classified into seven large subgroups: Mandarin, Wu, Yue (Cantonese), Min, Gan, Xiang, Kejia (Hakka). Here is a family tree of the Chinese languages proposed by You (2000), showing the history and development of these different subgroups.
Due to geographical factors, some varieties of the Chinese language have been isolated from others, and this isolation has led to changes in the way these varieties sound; for example, a native speaker of Shaoxing Chinese may find episodes of TV series in Wenzhou Chinese difficult to follow, if she watches them without subtitles, although the distance between the two cities is only a bit more than 300 km (which is a rather short distance by Chinese standards). This phenomenon is quite common in Southern China, and is called “different pronunciations within five kilometers”.
In traditional linguistic research on Chinese language, these subgroups are labelled “dialects of Chinese language”. I prefer to avoid the term “dialect” because it is not the case that all these linguistic varieties are mutually intelligible, which is the criterion that some Western sociolinguists might use to define “dialects” of the same language.
So you mean we can’t contrast “Chinese” with “Cantonese”?
Yes, this is indeed the case. Cantonese is a member of the Chinese language group, so it is a branch of the Chinese language; it does not make sense to say “I can speak Chinese and Cantonese” – to Chinese people this sounds equivalent to “I can speak English and London English”. However, we can still contrast “Mandarin” and “Cantonese”, or “Standard Chinese” and “Cantonese”, because these terms refer to different varieties of the Chinese language.
But what is Mandarin Chinese? Is there any difference between Mandarin and Putonghua?
Mandarin is a subgroup of the Chinese language that is widely spoken in Northern and South-western China; in Chinese, we call it Guanhua (官话), which means “the (Chinese) language spoken by officials”. Varieties of Mandarin do not have a unified pronunciation, but usually native speakers of different varieties of Mandarin can roughly understand each other.
The spoken form of contemporary standard Chinese is Putonghua, whose phonological system is based on Northern Mandarin, and, more specifically, on the varieties spoken in and around Beijing. A simple way to describe the relationship between Mandarin and Putonghua is that Putonghua is a member of the Mandarin group of languages, while Mandarin is a member of the group of Chinese languages. Nowadays, Putonghua is the most representative form of the Chinese language, and when we talk about “learning to speak Chinese”, we always refer to Putonghua.
This was only a sample of the questions that I have been asked to answer over the years, being both a linguistics student and Chinese. I could go on about the languages in China for hours, but I’m afraid I should stop here due to space and time limitations. If you are interested in learning more about the development and categorisation of varieties of the Chinese language, I sincerely recommend Jerry Norman’s Chinese – it is a wonderful introduction to this ancient and beautiful language which will be interesting even for speakers of ‘Chinese languages’ themselves.
Lewis, M. Paul, Gary F. Simons, and Charles D. Fennig (eds.). (2014). Ethnologue: Languages of the World, Seventeenth edition. Dallas, Texas: SIL International. Online version: http://www.ethnologue.com.
Norman, J. (1988). Chinese. Cambridge: Cambridge University Press.
The Law of the People’s Republic of China on the Standard Spoken and Written Chinese Language. 2000. The People’s Republic of China.
You, R. (2000). Chinese Dialectology. Shanghai: Shanghai Education Publishing.
Me again, with more stuff about relative clauses! In my defence, I have been working on reconstruction in relative clauses quite a bit recently, so this represents one way of desaturating my brain. That is not to imply that it is a tedious topic – far from it. Reconstruction effects in relative clauses give us a fascinating clue about how these constructions are built and how our interpretive faculties ‘read’ such structures. I have tried to avoid technicalities and jargon as much as possible, and to keep this blog entry a reasonable length whilst also getting to the core of some very deep questions in current syntactic theory. So, let’s get started.
We’ll start by considering the following data (if two elements have the same subscript, it means that the two elements refer to the same individual; if the subscripts are different, the elements refer to different individuals. The * means that the sentence is ungrammatical).
(1) a. Samx likes the picture of himselfx.
b. *Samx likes the picture of himx.
c. Samx thinks that Rosie likes the picture of himx.
In (1a), himself must refer to Sam. In (1b), him must not refer to Sam but must refer to some other singular male individual (some speakers find (1b) acceptable (Reinhart & Reuland 1993), but I and most other people I have asked do not). (1c) is ambiguous: him can either refer to Sam (as shown by the subscripts) or to some other singular male individual. The pattern in (1) is traditionally captured by the Binding Conditions (Conditions A and B to be more precise) (Chomsky, 1981). The Binding Conditions are quite technical so I won’t go into them here. What is important is the pattern in (1).
What happens if we relativise picture of X, i.e. modify picture of X with a relative clause?
(2) a. The picture of himselfx that Samx likes is quite flattering.
b. ?/*The picture of himx that Samx likes is quite flattering.
c. The picture of himx that Samx thinks that Rosie likes is quite flattering.
As we can see, the pattern in (2) is exactly the same as in (1). This suggests that we are interpreting the head of the relative clause, i.e. picture of himself, in the object position of like, since then (2) can be interpreted in the same way as (1). This in turn suggests that the head of the relative clause originated inside the relative clause and was moved to the position in which it is pronounced. However, when it comes to interpreting (rather than pronouncing) the structure, we ‘reconstruct’ the movement and interpret the head of the relative clause in its original position (see Bianchi, 1999; Kayne, 1994; Schachter, 1973; Vergnaud, 1974). For example, (2a) is interpreted as (3), where the bold copy is the one being interpreted. Note that this bold copy is not pronounced.
(3) The picture of himselfx that Samx likes (the) picture of himselfx is quite flattering.
The bold the is in brackets because technically the determiner the does not reconstruct with the head of the relative clause picture of himself (see Bianchi, 2000; Cinque, 2013; Kayne, 1994; and Williamson, 1987 on the so-called indefiniteness effect on the copy internal to the relative clause). Reconstruction thus captures the similarities between (1) and (2) in a straightforward way.
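To make the copy-based picture concrete, here is a deliberately crude sketch (my own toy model, not a serious implementation of any syntactic theory): movement leaves an unpronounced copy in the original position, and ‘reconstruction’ is just the choice to interpret that lower copy, as in (3):

```python
# Toy model of the movement chain in (3) (illustrative only).
# Each link records a copy of the relative-clause head, whether
# it is pronounced, and whether it is interpreted.
chain = [
    {"position": "head of relative clause", "copy": "picture of himself",
     "pronounced": True, "interpreted": False},
    {"position": "object of 'likes'", "copy": "picture of himself",
     "pronounced": False, "interpreted": True},
]

# Pronunciation targets the higher copy...
spelled_out = [link["position"] for link in chain if link["pronounced"]]
# ...while interpretation ('reconstruction') targets the lower one,
# placing 'himself' close enough to 'Sam' to be bound by it.
interpreted = [link["position"] for link in chain if link["interpreted"]]

print(spelled_out)   # ['head of relative clause']
print(interpreted)   # ["object of 'likes'"]
```

The point of the sketch is simply that pronunciation and interpretation can diverge: both copies exist in the structure, and different components of the grammar ‘read’ different links of the chain.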
In (2), the head of the relative clause served as the subject of the main clause. What happens when it serves as the direct object of the main clause?
(4) a. *Mrs. Cottony hates the picture of himselfx that Samx likes.
b. ?/*Mrs. Cottony hates the picture of himx that Samx likes.
c. Mrs. Cottony hates the picture of himx that Samx thinks that Rosie likes.
If the head of the relative is picture of him, the pattern is the same as in (1) and (2), which suggests that reconstruction has taken place. However, (4a) is ungrammatical for all the speakers that I have asked (a result of great significance given what is usually said in the literature). This is unexpected, especially if reconstruction is available in (4b) and (4c). If reconstruction were available, picture of himself should be able to reconstruct to the direct object position of likes inside the relative clause, where it could co-refer with Sam, just like in (3). However, the only interpretation available in (4a) is the ungrammatical one where himself tries to co-refer with Mrs. Cotton, suggesting that reconstruction is impossible.
The difference between (4a) and (2a) lies in whether there is an element in the main clause that himself could get its reference from. In (2a), there is no such element, so picture of himself is forced to reconstruct so that himself gets a reference. In (4a), there is an element, albeit an unsuitable one. This suggests that the Binding Condition which allows himself to get its reference from another element applies blindly/automatically: himself gets bound to Mrs. Cotton automatically, which prevents reconstruction from occurring. Later on, when it is time to interpret the binding relation, we discover that we were wrong to have bound himself to Mrs. Cotton, but by this time it is too late to perform reconstruction. This suggests that interpretation of syntactic structure only happens after all syntactic operations have finished. If it didn’t, we might expect that we could repair the mistake in (4a) by reconstruction. However, this is not what we find.
The same effect is also found in other constructions. Based on Browning (1987: 162-165), Brody (1995: 92) shows that (5) is acceptable suggesting that picture of himself has reconstructed to the direct object position of buy (the example is slightly adapted).
(5) This picture of himselfx is easy to make Johnx buy.
However, reconstruction is blocked if there is a potential element that himself could get its reference from, even if it turns out later to be unsuitable (Brody, 1995: 92).
(6) *Maryy expected those pictures of himselfx to be easy to make Johnx buy.
We have only scratched the surface of reconstruction in relative clauses here (there are more reconstruction effects and more subtleties that I have been working on but which would take too long to lay out here). What we have concluded is that reconstruction is generally available in relative clauses (at least in English). This tells us that relative clauses are constructed with a copy of the head of the relative clause inside the relative clause itself. The problem is how to choose which copies to interpret. It seems that there are structural conditions which force certain copies to be interpreted, i.e. the choice is not completely free. Explaining what these conditions are can thus provide a fascinating clue about how the human mind works (and how it doesn’t).
If you’re keen to find out more, Sportiche (2006) gives a good overview of reconstruction effects and Fox (2000) develops a nice account of how interpretation interacts with syntactic structure.
Bianchi, V. (1999). Consequences of Antisymmetry: Headed Relative Clauses. Berlin/New York: Mouton de Gruyter.
Bianchi, V. (2000). The raising analysis of relative clauses: a reply to Borsley. Linguistic Inquiry, 31(1), 123–140.
Brody, M. (1995). Lexico-Logical Form: A Radically Minimalist Theory. Cambridge, MA: MIT Press.
Browning, M. (1987). Null Operator Constructions. PhD dissertation, MIT.
Chomsky, N. (1981). Lectures on Government and Binding. Dordrecht: Foris.
Cinque, G. (2013). Typological Studies: Word Order and Relative Clauses. New York/London: Routledge.
Fox, D. (2000). Economy and Semantic Interpretation. Cambridge, MA: MIT Press.
Kayne, R. S. (1994). The Antisymmetry of Syntax. Cambridge, MA: MIT Press.
Schachter, P. (1973). Focus and relativization. Language, 49(1), 19–46.
Sportiche, D. (2006). Reconstruction, Binding, and Scope. In M. Everaert & H. van Riemsdijk (Eds.), The Blackwell Companion to Syntax. Volume IV (pp. 35–93). Oxford: Blackwell.
Vergnaud, J.-R. (1974). French relative clauses. Doctoral dissertation, MIT.
Williamson, J. S. (1987). An Indefiniteness Restriction for Relative Clauses in Lakhota. In E. J. Reuland & A. G. B. ter Meulen (Eds.), The Representation of (In)definiteness (pp. 168–190). Cambridge, MA: MIT Press.
Last week I dropped into an exciting event happening here in Cambridge, the ‘English Usage (Guides) Symposium’, organised by some folk over in Holland who are boldly Bridging the Unbridgeable. I must admit it had nothing to do with my PhD. However, being a linguist, and therefore avowed descriptivist, but also a copyeditor’s daughter with the bad habit of tutting at every cheeky hyphen with aspirations of being an en-rule, I couldn’t resist.
They had brought together authors, linguists, linguists-cum-authors, usage guide writers, usage guide revisers, journalists, a syntactician, and even Grammar Girl herself for two days of exchange and moderately warm debate.
The most interesting questions that were floated throughout the symposium were those getting behind the issues. Why are there usage problems? Where do they come from? And, what do ‘we’ do about them?
But, first off, what are these usage ‘problems’ that shelves of usage guides have been written to sort out? Any feature of a language – construction, word, phrase – which is thought by some speakers to not adhere to convention, or, worse, to be downright incorrect, ‘not the proper way of saying it’. There are split opinions over split infinitives. People are, like, unsure over the use of ‘like’ as a discourse marker. Dangling prepositions are something which people get het up about. Between you and I (or should it be me?), ‘literally’ as an intensifier is literally making steam come out of some folks’ ears. And so on. You get the idea. If you want to find some more examples, try Fowler’s Modern English Usage, Sir Ernest Gowers’ Plain Words, or perhaps the letters page of your chosen newspaper.
Where do these usage problems come from? One thing common to most comments about some problematic feature is the perception that it is a new(fangled) development. But this is almost always not the case. As David Crystal pointed out in his delightful talk on metalinguistic mentions in Punch magazine 1841–1901, the first mention of the dreaded split infinitive was in 1898 – and this is perhaps surprisingly late. Quotative ‘like’ (‘and she was like, “what’s bugging you?”’) is thought to have spread from California into British English in the 1980s.
What such comments do rightly hit upon, though, is that usage problems often arise with language change, perhaps as new words and grammatical constructions arise in the spoken language, but different, older conventions are adhered to in written communication. Together with this goes sociolinguistic variation – the coexistence of forms with similar meanings or functions in different sociolects, which leads to competition between them and some sort of value judgement. And, as was pointed out in the symposium by Pam Peters and Geoffrey Pullum, usage guides themselves, or rather their authors, sometimes practically invent usage problems out of very personal opinion or the sheer need to comment on linguistic features and create an elitist ‘eloquent English’. So there’s a self-fulfilling aspect to this too.
But once a linguistic feature becomes a ‘usage problem’, why does it claim such a share of the linguistic interest not just of ‘pedants’, but of the public in general? Why is it that everyone, it seems, even if not entirely sure about when to use ‘who’ and when ‘whom’, has some sense that there is something they should know about this and expresses attitudes about different usages (and their users)? That was what one of the ‘Bridging the Unbridgeable’ researchers, Viktorija Kostadinova, was asking. Two main views emerged during the symposium. For some, like Grammar Girl Mignon Fogarty, what was clear was that speakers like to have quick, simple answers, black-and-white, right-and-wrong, for functional reassurance (e.g. ‘if I say that in a job interview, will I be disadvantaged?’). I suspect there may be some appeal in feeling superior as well (inwardly tutting at those who apparently don’t know better). For Geoffrey Pullum, on the other hand, our obsession with correct usage comes instead from a grammatical masochism – we want to punish ourselves by finding out all the rules of our language, and how we’re doing things wrong. Why, he asked, are we happy to consult usage authorities older than our grandparents when we wouldn’t dream of consulting a medical or physics textbook from the 1920s? It must be a pleasure in our mistakes. I’m not so sure about that. One idea that did strike me, though, from Robin Straaijer, was the observation that, as we spend a lot of time investing in learning our (written) language – 13 school years here – we are then loath to discover that that’s not how you say it any more.
So what do ‘we’ do about them? By ‘we’ here, I mean mostly linguists, who are often called upon, whether by friends or a newspaper journalist, to comment on controversial linguistic features. Three options were presented last week. Firstly, as the Bridging the Unbridgeable team are setting out to do, we can descriptively investigate the sociolinguistics of this phenomenon – what are users’ attitudes to which usage problems? Secondly, we can help nudge usage guides from the Arts (personal opinions about best usage) to the Social Sciences (based on actual examples of usage) by providing historical linguistic information about the emergence, or decline, of conventions (put forward by Pam Peters). Thirdly, we could try to save speakers from their apparent ‘grammatical masochism’ by pointing to linguistic features for which, even in a ‘standard’ variety, there appears to be no clear majority view on convention. Whichever route is taken, it seems that dialogue in this area is important as it is through usage guides and usage problems that many people begin to be curious about language.
A while ago, I found myself on a plane sitting close to a distinguished and well-dressed man in his fifties. He bought an Italian newspaper and showed me the front-page picture of the newly formed Monti cabinet, following Berlusconi’s resignation in 2011. “Isn’t it curious – he asked me with a sarcastic tone – that they picked exactly this day to form the new cabinet?”. As I clearly did not get his allusion, he continued: “and isn’t it curious that they nominated exactly thirteen new ministers?”. For some reason, my puzzled face aroused his talkativeness. How could I not know how to decipher the messages they want to send through masonic numerology? For the remaining three hours of our flight, the man decided to charitably remedy my evident ignorance by talking uninterruptedly about every imaginable sort of conspiracy theory: “They want to leave us in a state of ignorance to control us better”, he said. “They are mostly aliens called reptilians and they occupy the positions of power all around the world”. “They know how to cure cancer but they won’t let us know”. “They can control the weather through the chemtrails produced by airplanes”. “They make babies get vaccinated because they know it causes autism”. I was extremely fascinated, I must confess. Not only by the inexhaustible fantasy of this guy and of his sources, but also – from a linguist’s perspective – by the very words he was using to make his points.
Conspiracy theories have always been extremely popular, and they are especially widespread in the internet era. Scientific explanations and rigorous fact checking can be boring, time-consuming and ultimately disappointing and/or unexciting. If you surf the sea of conspiracy websites and forums, you will easily notice that most conspiracist discourse follows schematic stylistic strategies and is characterized by a similar sensationalist and allusive rhetoric. But one thing that fascinated me about the talk of my bizarre travel companion, and that I later noticed in most conspiracy texts on the internet, is the widespread “empty” usage of the third-person plural pronoun “they”. Popular blog names include “Truth they are hiding” and “Stuff they don’t want you to know”. In the 1997 movie “Conspiracy Theory”, the conspiracy-obsessed character Jerry Fletcher (Mel Gibson) complains: “I’m only paranoid because they want me dead.”
If you are – predictably – now wondering who they are, I suggest that you look at this instructive flowchart. Here, I will only be interested in the word “they” itself. Semantically, one can distinguish three uses of third-person pronouns in English: in example (1), “they” acts as a variable bound by the quantifier “few linguists”. In (2), “they” is anaphoric to “John and Mary”, which appears in the preceding sentence. Finally, in (3), “they” is said to have a deictic or indexical use, i.e. it directly picks out the group of people made salient by the speaker in the extra-linguistic context:

(1) Few linguists admit that they are wrong.
(2) John and Mary missed the flight. They were late.
(3) [pointing at a group of people] They are my friends.
The interesting use of “they” in the conspiracy talk is of this third, indexical sort, i.e. cases in which there is no linguistic antecedent fixing the referent of the pronoun.
Indexicals are the paradigm of context-sensitive expressions: my utterance of the sentence “I am Italian” is different from your utterance of the very same sentence. In a sense, the two utterances share the same meaning, but they obviously have different contents and, probably, different truth-values. Such a straightforward intuition is at the core of David Kaplan’s theory of indexicals (Kaplan 1989), the mainstream theory of indexicals in semantics and philosophy of language. Incidentally, indexicals are also one big battlefield in semantics and pragmatics, with theorists questioning the very existence of a natural semantic class of indexical expressions or arguing about exactly which expressions fall into this category. The more conservative faction (e.g. Cappelen and Lepore 2005) strives to show that we should restrict ourselves to Kaplan’s very limited list of indexicals (including what he called “pure indexicals”, like “I”, and “true demonstratives”, like “that”). Others, more liberal, are very willing to expand this list to the point of encompassing context-sensitive predicates like “red” (Rothschild and Segal 2009) or vague words like “heap” (Soames 2002). Still others think that indexicals are a bit like the stars: there are many more than we can see with the naked eye! According to these authors, a myriad of hidden indexical-like variables are attached to most words and are ultimately responsible for all effects of extra-linguistic context on semantic content (Stanley 2000 and Stanley and Szabó 2000).
Kaplan distinguishes between the character, the stable linguistic meaning of words and more complex constructions, and the content they express at each context. The character of an indexical is a rule that guides the search for the relevant referent(s) in the context. Kaplan formulates such a rule-based analysis in functional terms: thus, the character of “I” is a function which, applied to any context, returns as its content the agent (the speaker) of that context. Not all such rules are so straightforward, though. For example, the character of “here” is said to be a function that picks out the location of the context, but the extent of such a location can be extremely indeterminate (in different contexts, “I was born here” can mean that I was born at this exact spot/in this country/in this world). Nor are all indexicals so strictly constrained by linguistic conventions: “I” arguably always picks out the speaker of the context without requiring any further information, but the linguistic meaning of a pronoun like “she” (in its indexical use) only specifies a few grammatical features (gender, number and animacy), leaving the identification of the relevant referent to the speaker’s intentions (i.e. to whomever the speaker has in mind in using the pronoun).
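For readers who like to see rules made concrete, here is a toy sketch of the character/content distinction in Python (my own illustration, not part of Kaplan’s formalism): a character is modelled as a function from a context to a content.

```python
from dataclasses import dataclass

# A toy model of a Kaplanian context: the parameters that
# "pure indexicals" consult when their character is applied.
@dataclass
class Context:
    agent: str      # the speaker of the context
    location: str   # the (possibly vague) location of the context

# Characters as functions from contexts to contents.
def character_I(ctx: Context) -> str:
    """The character of 'I': always returns the agent of the context."""
    return ctx.agent

def character_here(ctx: Context) -> str:
    """The character of 'here': returns the location of the context."""
    return ctx.location

c1 = Context(agent="Marco", location="Rome")
c2 = Context(agent="you", location="Cambridge")
print(character_I(c1), character_I(c2))  # same character, different contents
```

The same character, applied to two different contexts, yields two different contents – which is just Kaplan’s point about my utterance of “I am Italian” versus yours.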
Our original “they” is even more unconstrained: it does determine the number of the referent but leaves other features completely unspecified and dependent upon the speaker’s referential intention. Be that as it may, the standard picture has it that indexicals are tools of direct reference: intentions and pragmatic reasoning may be more or less crucial for fixing the referent of an indexical, but the content of an indexical, in the standard picture, is the object to which it refers in the context. What about our conspiracist “they”, then? How should we make sense of the directly referential nature of indexicals vis-à-vis the referential indeterminacy and, possibly, emptiness of the conspiracy “they”? Well, I can think of three possible solutions. First, we could deny that the use of the pronoun here is really indexical: perhaps the seemingly indexical “they” is here actually a disguised (and very general!) description, as when Obama utters (4):

(4) The Constitution gives me the power to veto legislation.
Here the indexical “me” is used descriptively to mean “The president of the United States” rather than the individual Barack Obama.
Secondly, we may say that the conspiracy theorists’ speech is completely meaningless, in the sense that a sentence of the form “They don’t want us to know such and such” does not express anything at all, or at most expresses just a vague content, because the use of the indexical is accompanied by only an imprecise referential intention (Nunberg 1993).
Or perhaps the conspiracist “they” is truly indexical and does actually refer… to the indexicals themselves! After all, semanticists know what an ugly beast indexicals are to cope with: nobody would be surprised to find that they are nothing but conspiring indexicals.
The enduring success of conspiracy theories in European history is fascinatingly described and exploited in two novels by Umberto Eco, Foucault’s Pendulum (1988) and The Prague Cemetery (2010).
 Actually, Kaplan’s content is itself a function (from a circumstance of evaluation to an extension). I will gloss over this point (and over many others!) for the sake of simplicity.
 But see Predelli 2005 for apparent counterexamples to this generalisation.
Ten days ago I was approached by Sylvia Tippmann, a young science journalist from London. Sylvia was very much interested in the phenomenon of bilingualism and wanted to record a short audio piece about raising bilingual children, including the neuro-cognitive aspects of speaking many languages. As someone who spends most of his time researching and thinking about words (and, if I’m not careful, thinking about thinking about words), I was easily persuaded to take part. It is my pleasure to share with you Sylvia’s excellent audio piece, featuring myself, Teresa Parodi and Napoleon Katsos from DTAL, Matt Davis from the MRC CBU, together with teachers, parents of multilingual kids, and other language specialists from around Cambridge and London.
(all errors in transcription my own)
[child talking in multiple languages]
We had concerns in the beginning, you know. Is it going to work out, whether two languages can work out, or three languages – we were not sure.
This is one of many parents in the UK whose first language isn’t English. His first child, three years old, grows up with German, English, and Chinese. He wondered if the child could cope. A quest to get to the bottom of these concerns led me to Cambridge. Napoleon Katsos is a professor of Linguistics. He tells me why people worry in the first place:
An uninformed, but perhaps intuitive view. As an adult, if you have to learn a foreign language, you struggle, right? You have to study, you have to think, you have to put effort into this – and people extrapolate from that and think that it’s difficult for kids as well. Therefore, if you have two difficult tasks ahead of you, you might not actually manage either of the two. It’s quite unfortunate that it was a mainstream view until even the middle of the 20th century, so you’d see researchers – well-established researchers – say things like that. Now, what makes people a bit worried – and I think is also contributing to this view that bilingualism is a problem, rather than an opportunity – is that in the early years of life, because the child has less input in each language than the pure monolingual child would have in the language that they speak, it is likely that the bilingual child will not progress as quickly as the monolingual.
My own daughter, I’m afraid, because she was spoken to by her mum in French when she first started growing up, her language development in English was being delayed and she was getting upset at school because she couldn’t express herself in English. We went to see a speech therapist and the speech therapist told us to speak just English to her at home, and I regret this decision to this day.
The attitude to raising children bilingually in the UK is a bit different to that in non-English-speaking countries, in the sense that parents and schools are more reticent about exposing their child to a further language…
This is Teresa Parodi, professor of applied linguistics and an English, Spanish, German, French multilingual herself.
My specialisation is on language acquisition, and I have worked extensively with families in bilingual situations. There is an idea that maybe it’s a burden for the child at a young age and we should wait until the child is a bit older. The answer to that is: it is not a burden. Children just learn it – and the sooner, the better. The younger they are exposed to a language, the better will be the results.
Of course, there is no magic trick – the number of words is proportional to the time you spend hearing and speaking a language. Katsos explains:
So, they might have a smaller vocabulary. An English-Chinese bilingual child might have a smaller vocabulary in English, compared to the monolingual English, and a smaller vocabulary in Chinese, compared to the monolingual in Chinese. [multilingual child names food in English and German] So, if you now count the vocabulary differently – if you give a point for every concept for which the child knows the word, regardless of whether it is in English; if you count all the concepts that a child knows a word for, then bilingual children don’t lag behind monolinguals at all. What has been an extremely consistent finding is that, given enough input in both languages, the bilingual child can become as good as two monolinguals, as good as a monolingual in each of their languages. A bilingual child that has sustained input in both languages – that is, both languages are spoken in this child’s environment, and both languages are respected, none of them is looked down upon, so the child has motivation to use them and comprehend them – they will catch up with the monolingual by late primary school, between the ages of 10 and 12.
Cambridge has its own International School, teaching children from ages three to sixteen, with a variety of language backgrounds – from Chinese to Norwegian to Hebrew.
My name’s Corrine, I’m the head of Infants here at Cambridge International School. Every single child is different. For example, in my class this year, I had three children who had no English at all. So people think: “Oh, they can speak English! It’s only been a year – fantastic!” but the level of understanding is not deep enough. It takes children between five and seven years to be at the same level as their English peers.
Even though children are much faster learners, they don’t acquire English in an instant if it’s their second language at school. Another reason to expose them to multiple languages right from the very beginning.
Of course, there is an issue of how children learn two languages at the same time, or one after the other, what happens in development. Now, if the child is exposed to two languages at the same time, there is a fear that the child may be confused, and not know which language is which. Research tells us they are never confused. They know socially which language to speak to whom.
Yes, my mum speaks Spanish with my dad, and my dad speaks French to me.
Linguists and children seem to agree – there is nothing to worry about. In fact…
The main advantages of growing up bilingual are in different domains. On the one hand, we have very obvious social advantages: one can talk to someone who speaks a different language, integrate in a different culture… If you look at the labour market, you have one more labour market in which to look for a job.
In fact, bilingualism has benefits for cognitive functioning. Bilingual children, because they engage in this mental gymnastics where the languages that they learn sort of compete with one another – they might have to use Chinese to address their mother, and the next second they might need to switch to English to talk to their best friend – this makes bilingual children better at a set of cognitive functions which we call executive control.
Nikola Vukovic, a PhD student in applied linguistics, explains what that means…
Well, these are very general skills. If you’re in a noisy room, and you need to isolate someone’s voice from the noisy environment, then this would be the kind of skill that helps you. Or: if you need to multitask as well, to juggle different kinds of information at the same time, then again, this kind of control would help you a lot.
Executive control, you can think about it like this: we need it any time we need to do something new. You need executive function when you have to go to the supermarket instead of going straight home. [laughs] You have to go to the post office… Any time you need to do something you don’t normally do, you need executive function…
This is Arturo Hernandez, professor of psychology at the University of Houston in the US.
My area of expertise has been the neural basis of bilingualism. The question that has been brought up is: do bilinguals use executive function to a greater extent because they have to, in some ways, either speak a language they don’t normally use, and so they have to use a lot of executive function to overcome their native language or their better language, which comes out naturally. The other one is: do they use executive function because they have to switch? The discussion in the literature is that multitasking, true multitasking, doesn’t exist. What actually happens is called task switching. So if I’m driving and I’m on the phone, I’m actually very quickly switching between the conversation that I have on the phone and driving. The question is – since bilinguals use two languages and they switch, are they better switchers?
Yes, I’m thinking about Tristan, who was trilingual… and he would speak to his father in German, and he would turn mid-sentence and change to Swedish, to finish the sentence to his mother. Mid-sentence! It’s amazing.
How does Tristan’s brain juggle two languages? I asked Nikola again.
In your mind, instead of having just one word for “running”, for “smiling”, you have two or three or more words, so what this results in is competition, and the need for your brain to manage all of these different languages at the same time. Bilinguals really cannot help but activate all of their languages, at all times. The way normal people would think about this is that when you’re using English, you’re just using English. When you’re using German, you’re using German. But actually what we find in experiments is that both of your languages are active at all times, and you can see that, for example, when people process words that sound similar in different languages. So the word “rock” can mean a stone in English, but in German it also sounds very similar to a “skirt” [Rock, in German]. When a bilingual encounters such a word – even in a completely monolingual English environment – they are still going to activate the German meaning, in addition to the L2 [second language] meaning. But of course, they’re speaking English at that time, and so it’s important to them to only pay attention to the one relevant meaning that they need. So what they’re going to do is suppress, all the time suppress, the irrelevant word in the irrelevant language. What this does cognitively – doing this for years and years – is it strengthens your executive capacity.
Does that change anything in our brains? I found an expert in the Cognition and Brain Sciences Unit, who had answers.
My name’s Matt Davis, I’m a neuroscientist based in Cambridge. I’m interested in what goes on in the brain when you’re understanding language or when you’re learning language. There’s several different questions there that’ve been addressed by research using different brain imaging methods. So one of the first things that I’ll say is that, as far as we can make out, the two languages that a bilingual knows are stored in the same brain circuits – they’re kind of overlapping knowledge of the two. This is an interesting and important challenge for the bilingual who has to remember that when they’re speaking French they don’t say “chair”, they say “chaise”; or to remember that in English “sank” is the past tense of the verb “sink”, whereas in French it’s the number “five”. So, keeping the knowledge of the two languages separate is actually quite a challenge for the bilingual brain. Part of the reason it’s a challenge is that actually it’s the same neural circuits, the same parts of the brain that are involved in storing both languages. There are parts of the brain which become larger in people who speak more than one language. So, there’s some research that shows that parts of the parietal lobe [he points to his head, above his left ear and a bit to the back]. This part of the brain has more tissue, more neurons, in it for people who speak more than one language. The more proficient they are in that language, the bigger that part of the brain grows.
So what does all this mean in practice? What is the best support for a child who grows up bilingual?
I think the biggest question we get from parents… they would often ask: “Should I speak to my child in English at home, or should I speak our mother tongue, German for example,” and we’re always promoting their mother tongue – because if the child understands in their mother tongue they’re more likely to transfer that concept or that skill into English.
If two people are living in a country that speaks yet a third language, I would still think that it’s a good thing to keep and speak their own language, and the child is exposed to a third language in the social environment. Nothing untoward happens in my experience – what happens is the child learns three languages. [laughs]
All the research done around bilingualism hints that we probably shouldn’t make a big deal of it at all, and should just enjoy the positive aspects that come with learning additional languages.
So, in a way, bilingualism is not going to introduce any negative effect in that respect. But it *will* introduce a positive one – namely, the fact that, all of a sudden, you have two languages to use, two cultures to explore, two sometimes completely different ways of looking at the world and of processing the information in the world, focusing on different aspects… So I would say that that’s actually a huge benefit, rather than something to be worried about.
[child speaks in a foreign language]
End of Recording
A version of this post also appeared in June 2014 on my personal blog, which is accessible here.
My research focuses on the syntax of relative clauses. A typical relative clause is a type of subordinate sentence which modifies a noun. For example,
(1) a. the book that I’m reading
b. that blog post you’ve written
c. the man who saw me
d. a type of subordinate sentence which modifies a noun
The subordinate clause in each of these examples is traditionally called the relative clause. Relative clauses are theoretically interesting for a number of reasons. Some syntactic ones are: they are optional, i.e. nouns do not require a relative clause; the noun being modified seems to play a role in both the main clause and the relative clause; and relative clauses resemble other constructions to a greater or lesser extent, e.g. interrogatives, possessives, etc.
The head of the relative
One of the major debates in the syntax of relative clauses lies in where we say the noun being modified originates in the syntactic structure (I will call this noun the relative head from now on). Consider the following example:
(2) You wrote the book that I’m reading.
Intuitively the relative head ‘book’ is the direct object of the main clause verb ‘write’. We also understand that ‘book’ is the direct object of the relative clause verb ‘read’. How can it be two things at once?
One option is to say that ‘book’ is base-generated, i.e. enters the syntactic structure, as the direct object of ‘write’ and is co-indexed with a relative pronoun in the relative clause (if two items are co-indexed, it basically means they refer to the same thing). This relative pronoun may be ‘who’, ‘which’ or silent (or ‘that’ depending on your analysis). Adopting the silent option and symbolising this silent pronoun as REL.PRO (for ‘relative pronoun’), the sentence in (2) would have the structure in (3) (the relative clause is placed in square brackets and the co-indexing is symbolised by the subscript ‘i’).
(3) You wrote the book_i [REL.PRO_i that I’m reading]
But how does this capture the idea that ‘book’ is also the direct object of ‘read’? For this we say that the REL.PRO has moved from the direct object position of ‘read’ to the left edge of the relative clause. This gives the structure in (4).
(4) You wrote the book_i [REL.PRO_i that I’m reading REL.PRO_i]
This captures our intuitions about how ‘book’ relates to the main clause and the relative clause. This is the sort of analysis found in Chomsky (1977) and Sauerland (2003), for example.
Another option would be to abandon co-indexing and say that ‘book’ is base-generated as the direct object of ‘read’. Instead of having a silent REL.PRO move to the left edge of the relative clause, the head of the relative itself moves (I use a subscript ‘1’ to symbolise that the two occurrences of ‘book’ are two copies of a single item rather than two independent items).
(5) You wrote the [book_1 that I’m reading book_1]
We would then say that the copy of ‘book’ in the direct object position of ‘read’ is not pronounced but is nonetheless present in the structure since we are able to interpret ‘book’ as being the direct object of ‘read’. The copy at the left edge of the relative clause is pronounced, giving the sentence in (2). This is the sort of analysis found in Kayne (1994).
The head, the ‘the’ and the relative clause
The type of relative clause we have been looking at is called a restrictive relative because it restricts the possible denotation of the noun. For example, (6) means that you wrote something and that something is a book AND that something is being read by me. In other words, the direct object of ‘write’ has to satisfy both the condition of being a book and being something that I’m reading. It allows you to distinguish this book from one that I’m not reading.
(6) You wrote the book that I’m reading.
To capture this, we say that the head of the relative and the relative clause are in the scope of the determiner ‘the’.
(7) [the [book that I’m reading]]
This can be captured in the syntactic structure by saying that [book that I’m reading] forms a constituent which excludes the determiner ‘the’. Now we have an interesting problem: ‘the’ appears with nouns, not clauses, which might suggest the following structure.
(8) [the [book [that I’m reading]]]
In this structure, ‘the’ requires a noun and so selects ‘book’. The relative clause modifies ‘book’ and so attaches to ‘book’. But there is evidence suggesting that the presence of ‘the’ is tied to the presence of the relative clause (a * means that the sentence is ungrammatical).
(9) a. London is beautiful.
b. *The London is beautiful.
c. The London that I remember is beautiful.
d. *London that I remember is beautiful.
A proper name, for example, ‘London’, cannot ordinarily appear with ‘the’ (hence the difference between (9a) and (9b)). However, when a proper name is modified by a relative clause, ‘the’ must appear (hence the difference between (9c) and (9d)). This suggests that ‘the’ requires the relative clause and not the noun! The following structure captures this idea (see Kayne, 1994).
(10) [the [[book] that I’m reading]]
Now, we have to come up with a way of relating ‘the’ to the head of the relative ‘book’, unless we want to abandon the idea that ‘the’ typically appears with nouns (an idea which might not be as crazy as it sounds). We could say that ‘the’ and ‘book’, by virtue of being close enough to each other in some non-technical sense, can enter into a relationship. Note that ‘book’ does not have a determiner of any kind. This is unusual in English.
(11) a. *I like book.
b. *Book is good.
We could therefore say that ‘book’ has an empty position for a determiner (I’ll call it D) that enters into a relationship with ‘the’ (see Bianchi, 2000).
(12) [the [[D book] that I’m reading]]
We can now make a prediction: if some other element occupies this D position, ‘the’ cannot form the required relationship and the sentence will be ungrammatical. A preposed genitive competes with ‘the’ in English, as seen in (13).
(13) a. the book
b. Bob’s book
c. *the Bob’s book
Now, if a preposed genitive occupies the D position that ‘the’ is aiming to form a relationship with, there will be trouble because ‘the’ and a preposed genitive cannot both be related to this same position, as seen in (13c). If ‘Bob’s’ is present, ‘the’ cannot be, but if ‘the’ is absent, the relative clause must be absent too. This accounts for why (14) is ungrammatical.
(14) *You wrote Bob’s book that I’m reading.
The only way to say what (14) intends to say is not to prepose the genitive, as in (15).
(15) You wrote the book of Bob’s that I’m reading.
Since ‘Bob’s’ no longer occupies D, ‘the’ is free to form a relationship with D and the sentence is grammatical.
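The pattern of judgements we have just walked through can be summarised in a toy checker (my own sketch, not a serious implementation of Bianchi’s analysis), treating ‘the’ and a preposed genitive as competitors for the single D position:

```python
# Toy checker for the determiner-competition pattern discussed above.
# A (count-)noun phrase is modelled by three features: whether 'the' is
# present, whether a preposed genitive is present, and whether the head
# noun is modified by a relative clause.

def grammatical(the: bool, genitive: bool, relative_clause: bool) -> bool:
    # 'the' and a preposed genitive compete for the same D position.
    if the and genitive:
        return False          # *the Bob's book
    # A relative clause requires 'the' to relate to the head's D position.
    if relative_clause and not the:
        return False          # *Bob's book that I'm reading
    # A bare singular count noun needs some determiner.
    if not the and not genitive:
        return False          # *I like book
    return True

print(grammatical(the=True, genitive=False, relative_clause=True))   # the book that I'm reading
print(grammatical(the=True, genitive=True, relative_clause=True))    # *the Bob's book that I'm reading
print(grammatical(the=False, genitive=True, relative_clause=True))   # *Bob's book that I'm reading
```

The three rules simply restate the judgements in (11), (13c) and (14); a real analysis would of course derive them rather than stipulate them.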
That concludes this introduction to the syntax of relative clauses. We have seen that relative clauses are complex and have quite a counter-intuitive structure once we delve into the systematic patterns of grammaticality and ungrammaticality manifested in English. But that is the way of things – language is a part of the natural world and, just as theoretical physics is dumbfounding us with discoveries about the weird and wonderful nature of the physical universe, so too can theoretical linguistics make discoveries about the underlying structures of our linguistic universe (and all that without a Large Hadron Collider … for now).
Bianchi, V. (2000). The raising analysis of relative clauses: a reply to Borsley. Linguistic Inquiry, 31(1), 123–140.
Chomsky, N. (1977). On Wh-Movement. In P. Culicover, T. Wasow, & A. Akmajian (Eds.), Formal Syntax (pp. 71–132). New York: Academic Press.
Kayne, R. S. (1994). The Antisymmetry of Syntax. Cambridge, MA: MIT Press.
Sauerland, U. (2003). Unpronounced heads in relative clauses. In K. Schwabe & S. Winkler (Eds.), The Interfaces: Deriving and interpreting omitted structures (pp. 205–226). Amsterdam/Philadelphia: John Benjamins.
This week has yielded a bumper crop of language-in-the-media articles, not least, with a nice Cambridge link, Sir Leszek Borysiewicz championing the benefits of bilingualism, appearing on the same day as a report of more findings to show that speaking more than one language has an effect on cognition in old age.
Today, though, I want to go back a couple of weeks to a feature in the New Scientist about a new piece of research to come out of the MIT Media Lab: “Kindergarten bots teach language to tots”. It reports on an experiment just underway which aims to find out how well young children learn language from robots, and so enhance their education¹.
These aren’t some old-school micro-processor-and-a-bit-of-meccano robots that no child would think of befriending; nor, it seems, do they employ technology so new and unaffordable that making this study into classroom reality is unthinkable. No, they’re ‘sociable robots’, fluffy and marsupial, and those winsome eyes you see are displayed on a smartphone, which doubles up as the computer behind the robot. It can be remotely controlled from a tablet, either to instruct the software, or to directly manipulate the robot.
I don’t know what your reaction is to this idea. Mine oscillates between impressed enthusiasm and sceptical discomfort. But, disregarding feelings, does it work? Well, the study runs for 8 weeks, so we’ll have to wait with bated breath until then. In the meantime, though, we can think about whether we expect it to work, given what we know already about how children learn language.
Let’s concentrate on just one part of language acquisition, word learning. There are, broadly speaking, three approaches to how children learn words (typically looking at nouns, as the ‘simplest’ case)². Nativist views make a strong claim for in-built constraints which narrow down the hypothesis space, such as the whole object principle (nouns refer to whole objects) or mutual exclusivity (each object has only one name). This could align with a Universal Grammar-type account, but doesn’t have to be generativist itself (all generativist accounts are nativist, but not all nativist accounts are generativist). Secondly, there’s a functional, socio-pragmatic view: what’s important is the child’s desire to communicate, to understand the speaker’s intention. Finally, associationist views suppose that associative and attentional mechanisms create biases over time which aid word learning. What would each one say about bots for tots?
First up, constraints. If a child hears a novel word from the robot, say when looking at a picture book (or, in this day and age, a tablet) together, then the whole object constraint will lead them to reason that the whole object, not just the handle, is a wrench.
Or, if they hear wrench but can see two objects, one unfamiliar and one they know, then they can reason “each object only has one label. I know that one is called a hammer, so ‘wrench’ can’t refer to it, so it must refer to the other, new object”.
So, on the view that children have in-built constraints for learning words, as long as they recognise the interaction with the robot as a language situation, they should be able to develop their vocabulary with it.
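The mutual exclusivity inference described above is simple enough to write down explicitly. Here is a minimal sketch in Python (a toy of my own, not the MIT system): given the child’s current lexicon and the objects in view, the novel word is assigned to the only unlabelled object.

```python
# Toy mutual-exclusivity inference: a novel word is mapped to the one
# visible object that has no known label yet.

def infer_referent(novel_word, visible_objects, known_labels):
    labelled = set(known_labels.values())
    unlabelled = [obj for obj in visible_objects if obj not in labelled]
    # Mutual exclusivity: each object has only one name, so the new word
    # must name an object the child cannot already label. The inference
    # only goes through if exactly one such object is in view.
    return unlabelled[0] if len(unlabelled) == 1 else None

known_labels = {"hammer": "hammer-object"}   # the child's existing lexicon
print(infer_referent("wrench", ["hammer-object", "mystery-object"], known_labels))
# → mystery-object
```

Note that the inference fails gracefully when every visible object already has a name – exactly the situation in which the child would need some other cue.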
Next, in the functional, socio-pragmatic view, children learn the meaning of words through their general understanding of the situation, and especially what the speaker is trying to communicate. The key ingredients are an ability to share in joint attention, including gaze following, to infer the speaker’s intentions, and to have some expectation of co-operativity in the communication, as outlined, for example, in Grice’s maxims (be informative, be relevant, be brief and orderly). Gaze following and joint attention? The robot has eyes, so, I guess, tick. Inferring intentions? Hmm, trickier. Does the robot have intentions? More importantly, does the child attribute intentions to their bot, and expect it to behave communicatively in a conventional way? We all know that children readily ascribe personality and animacy to teddies, dolls and other toys, and indeed many word learning experiments rely on this fact, as the method involves a puppet (or, occasionally, ‘Max the silly gorilla’) labelling a novel object or testing the child’s comprehension.
The associationist view attributes the apparent constraints or intentions to low-level associative processes, of salience (attention to what is most salient), blocking (of a new association by an existing association with the stimulus), context dependency (memory is aided by like contexts), and cross-situational learning (comparison of cue–stimulus associations across contexts). What does a child need, then, in this view, to learn a new word? A cue (the novel word), a stimulus (the object referred to), and multiple instances of these co-occurring across similar and dissimilar contexts. All of these the bot can provide, especially as it can track the infant’s vocabulary use and adapt stories appropriately.
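The cross-situational component lends itself to an equally small sketch (again my own toy model, with made-up words and objects): tally word–object co-occurrences across situations and let the strongest association win.

```python
from collections import Counter
from itertools import product

# Each situation pairs the words the child hears with the objects in view.
situations = [
    ({"look", "wrench"}, {"wrench-obj", "table-obj"}),
    ({"the", "wrench"},  {"wrench-obj", "cup-obj"}),
    ({"big", "table"},   {"table-obj", "wrench-obj"}),
]

# Tally every word-object co-occurrence across all situations.
cooccur = Counter()
for words, objects in situations:
    for word, obj in product(words, objects):
        cooccur[(word, obj)] += 1

def best_referent(word):
    """Return the object most often co-present when the word is heard."""
    candidates = {obj: n for (w, obj), n in cooccur.items() if w == word}
    return max(candidates, key=candidates.get)

print(best_referent("wrench"))  # 'wrench-obj' co-occurs most often
```

Spurious associations (“wrench” with the table, say) are washed out as more situations accumulate, which is the core of the cross-situational idea.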
So, it looks pretty good for these sociable robots. They’re interactive, and so the minimal features of the main theories have no problem stretching from caregiver–child dyads to robot–child ones. The MIT researchers seem to have good reason to be hopeful. But will it really work? These fluffy bots may aid the child’s vocabulary growth, but there’s so much more to language. Maybe such studies will be able to tell us something more about exactly what it is in the language input from older humans or the child’s developing social and cognitive abilities which allow them to acquire language so effectively.
1. Kory, J., Jeong, S., & Breazeal, C. L. (2013). Robotic learning companions for early language development. In J. Epps, F. Chen, S. Oviatt, & K. Mase (Eds.), Proceedings of the 15th ACM International Conference on Multimodal Interaction (pp. 71–72). New York, NY: ACM.
2. Proponents of the constraints, functional and associationist views respectively include:
Markman, E. M., & Wachtel, G. F. (1988). Children’s use of mutual exclusivity to constrain the meanings of words. Cognitive Psychology, 20(2), 121–157.
Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press.
Smith, L. B. (1999). Children’s noun learning: How general learning processes make specialized learning mechanisms. In B. MacWhinney (Ed.), The emergence of language (pp. 277–303). Taylor & Francis.
For an excellent overview of the field, see:
Ambridge, B., & Lieven, E. V. (2011). Child Language Acquisition: Contrasting Theoretical Approaches. Cambridge: Cambridge University Press.
Early this morning, long before you decided to read this blog post, your body awoke to a new day. Your arm torpedoed up, perhaps, smashed onto the table top, and with unthinking precision incapacitated the alarm clock. Your other arm, still weak from the anaesthetised stupor of preceding hours, rubbed some clarity into your out-of-focus eyes. What went on next was a sequence, as individualised as it is ritualised, of noisy drawers opening and closing, twisting faucets, toothpaste tube caps and brushes, shower doors, and refrigerator doors. Perhaps, while having breakfast, you turned on the radio or TV. You learned, from the way compressed air waves made by the stereo speaker hit your ear drums, that today is going to be another rainy day in Cambridge, and that you’d better grab an umbrella before going to work.
Having arrived at work, and chatted with your office friends, and having turned on your computer, you are finally ready to begin productive activity. That is, after you read this one blog post. Well, I’m here to say that already you should be proud of yourself! The kind of physical and cognitive acrobatics that you effortlessly squeezed into a single morning are beyond the capacity of any other species known to man. Indeed, a whole army of cognitive science graduates could wither away trying to understand how you’re doing what you’re doing right now – looking at squiggly lines on a computer screen and extracting sense from them. However, you might be surprised to find that cognitive science, when looking for an explanation of how you achieve these mental feats, thought it more instructive to look not at you or the way you act and move in the world, but at your computer. This is not because you accidentally forgot to erase your browsing history but because, for the last century or so, scientists believed that human thinking is computational. What goes on in your head when you understand language, according to traditional thinking, is in fundamental ways no different from running a very sophisticated computer program1,2,3. Out of this idea was developed an entire orthodoxy – namely, that linguistic thinking is based on the manipulation of abstract symbols (as abstract as a computer’s binary code) by a computational mechanism that is entirely modular and “informationally encapsulated” (just like any computer component is modular). The goal of cognitive science became the specification of what these computations are, and because an algorithm is an algorithm whether it runs on a PC or a Mac, it needn’t worry about the fact that human cognition is implemented in the wet-ware that is a biological body.
However, a wealth of behavioural and neuroscientific research conducted over the last twenty years has uncovered a radically different picture of language. Far from being abstract and symbolic, meaning is rooted in our bodily interactions with the environment (like the ones described before), and thus grounded in sensorimotor mechanisms4. For instance, scientists found that hearing or reading a word such as “run”, “punch”, or “smile” activates in your brain the very same areas which you use to perform the action of running, punching, or smiling5. Similarly, they found that words related to smells activate olfactory circuits, visual terms depend on the visual system, and sound-related words spark the auditory cortex. In other words, language is neither completely symbolic, nor is it modular and independent of modal neurocognitive mechanisms. To understand a word, new research shows, is to perform a mental simulation which reuses the very same representations and mechanisms used during action, perception, and emotion. Indeed, researchers have found that people simulate a whole variety of perceptual features, including shape, size, colour, and more6,7. We even simulate abstract features such as event duration in this way. Did you know, for example, that it takes you longer to process a sentence such as “Road 49 crosses the desert” if I tell you that the desert is 400 miles in diameter, as opposed to only 30 miles?8 Other studies show that your body posture and movements can influence the way you understand sentences. For example, responding to a string such as “You deliver the pizza to Andy” is faster if the response involves moving your hand away from your body. Similarly, participants are faster to judge sentences such as “She delegates the responsibilities to you” when the response movement matches the motion implied by the sentence9. 
What all of these studies suggest is that people understand language, even when it is abstract, in terms of bodily action and the motor programs used in movement. Neurobiologically, this supports theories of language learning which place experience at centre stage, and which argue that meaningful language units are instantiated in neuronal assemblies arising through the coupling of action–perception circuits10.
The last century marked a Cartesian effort to dissociate mind from body, reason from emotion, language from the lived and embodied realities in which it is used. While early psychologists ousted the study of mental phenomena from their science, focusing solely on observable behaviour, the violent computationalist backlash which they provoked threw away the baby with the bathwater, alongside everyone happening to be in the apartment at the time. Only recently have people started to pay attention to both the mental and the physical constraints which shape our thinking. The newfound realisation that higher cognition and our sensorimotor systems form an interconnected and dynamic whole places theorising on a much firmer footing, and promises a more unified, coherent, and comprehensive linguistics and cognitive science.
Two of the things that take up a lot of my time these days are language and salsa. Language is the object of my research: this fascinating tool that human beings acquire almost effortlessly in their childhood years, which is, at the same time, a system so complex that many of its aspects still constitute unresolved puzzles for linguists. Salsa is currently my favourite hobby: a dance of Cuban origin performed in couples, with occasional rapid changes of partner when several couples dance in a circle. Given how much of my time language and salsa occupy, it didn’t take long before I started thinking about the relation between the two (as if it weren’t bad enough that PhD students invest time and effort in drawing correlations between concepts, these habits become so ingrained that they spread to other aspects of their lives. Sad? Geeky? I’m not really sure).
What does salsa (or any kind of partner dance, for that matter) have to do with language, you may wonder. Well, knowing how to dance means that you are in a position to meet people on the dancefloor whom you have never seen before, and interact with them, achieving a certain (preferably graceful) motion outcome. This becomes possible thanks to the use of appropriate signals which the man as the leader needs to convey accurately, and the woman as the follower needs to interpret correctly (and no, before you ask, these dances are not a suitable context to talk about feminism). So, just like talking, this kind of dancing in couples requires communicative accuracy between participants – a feature that makes these exchanges of meaning different from expressions of meaning we may attribute to solo dancers (or any other artists, for that matter), which are more open to alternative interpretations on the audience’s part. In this sense, communication between salsa dancers who have never met before does not seem very different from a conversation between people who meet for the first time at, say, a wine reception and use a common language to convey and interpret thoughts, facts, jokes. Of course, the language of dance has a much more limited scope than natural language. In fact, the meanings conveyed by dancing signals could be translated into natural language along the lines of ‘go left/right’, ‘turn’, ‘give me your hand’, ‘pass underneath my arm’ etc. But my aim here is to point out in what way the two might be similar: both dancing and talking are based on an exchange of mutually understood signals that make up a system of communication; knowledge of the system makes it possible to invent new combinations of dance moves, or create new sentences on the spot.
This comparison between language and dance takes us to the wider issue of the relationship between language and thought: Do the two overlap? Are they distinct? Can we have one without the other? The example of communication between dancers suggests that there are ways to formulate and communicate thoughts without the use of words, which in turn implies that language and thought do not coincide. In the video below, Nicky Clayton, a professor of comparative cognition, and the artist Clive Wilkins consider a few more such examples of thinking and communicating that go beyond language. These include the performance and observation of tricks with cards which create expectations about what outcome would be predictable or surprising, communication between birds, and more.
For linguists and anthropologists, the question of the relationship between language and thought is (among other relevant debates) associated with the idea of linguistic relativity, and the Sapir-Whorf hypothesis (you can read more about this here: http://plato.stanford.edu/entries/relativism/supplement2.html). The latter suggests that the language we speak determines the way we think about the world, confines the concepts we can entertain, and even constructs our reality. This is a particularly strong claim that mainstream contemporary linguistic theory does not accept, in part due to a lack of experimental evidence in its support. At most, our native language may influence to some extent the way in which we draw distinctions, or categorise concepts in our mind, but there is no reason to believe that it is a cause of fundamental differences in the way we think or perceive the world (for more on this, see Pinker 1994, Ch. 3).
A commonly cited piece of trivia relating to this discussion is that Eskimo languages have a much larger than average number of words referring to the concept of ‘snow’; this might indicate that speakers of these languages can perceive finer distinctions of types of snow, as opposed to speakers of, say, Greek. But this is probably a difference in the reality of the populations in question which is simply reflected in their languages, rather than an indication of radical differences in the way Eskimos and Greeks think about snow. In other words, the daily life of Greeks simply does not place them in situations where they have to think about snow very often, which is why they have not bothered to recognise and lexicalise as many distinctions between different kinds of snow. Another similar example I once heard and found fascinating is that of the conceptualisation of time in the Malagasy language in a way that is reversed with regard to most other languages (Dahl 1995). In Indo-European languages, for example, we tend to conceptualise time linearly, with ourselves facing the future and having the past behind, which is reflected in expressions like ‘I look forward to x’, ‘Those sad events are behind me now’ etc. But Malagasy conceptualises time as a backward movement with the face turned towards the past, because, after all, the past is something we have already faced, whereas the future is completely unknown. This conceptualisation is reflected in expressions denoting time in this language. But as fascinating as this difference may be, it does not necessarily suggest that the concept of time for speakers of Malagasy is fundamentally different from ours. A more plausible conclusion is that, among the many possible ways in which we could talk about the world, different populations simply made different choices.
Moreover, using examples such as the above to claim that alternative concepts are not conceivable unless one’s native language has words for them – which is the idea behind linguistic relativity – seems counterintuitive, because if this were the case, we would be unable to expand our existing inventories to accommodate words for new concepts.
The idea that seems to put language and thought into a better perspective is that language facilitates thought: consider situations where we have to count things; uttering the words for numbers while doing so definitely helps the process. Then there are those times we instantaneously conceive ideas which we then decide to put into words, either by explaining them to someone, or by writing them down. And it is only then, when thoughts are transformed into words, that the abstract ideas become more tangible. They may not even sound so good anymore, because possible logical fallacies and incoherencies of arguments become more evident when the ideas are spelled out. But these examples merely suggest that thinking becomes more efficient if supported by language, which is arguably why verbal thinking is relied upon so much in modern societies. It does not follow from this that language and thought overlap, nor that language determines the way we think. The fact that (to choose an example at random) while dancing salsa it is possible to conceive, communicate, and interpret thoughts without the use of natural language suggests that language and thought are two distinct fields. But this does not undermine the power of language in any way; in fact, the ability of language to build bridges between people’s minds and its presence in most of our daily activities is the reason we tend to equate the two, and the reason I had to resort to considerations about salsa dancing to point out that language and thought do not actually coincide.