This is essentially a follow-up to my previous post, with a more practical focus, but it shouldn’t be necessary to read the earlier post to understand this one.
The pro-gun activists in the United States have a slogan: “guns don’t kill people, people kill people” (parodied by Welsh rap act Goldie Lookin Chain in the song Guns Don’t Kill People, Rappers Do.) The basic idea, presumably, is that guns, being inanimate objects, clearly cannot take responsibility for killing: rather, the responsibility for killing lies with people who use guns to that end. (And therefore we should, the argument goes, focus our attentions on stopping people from using guns to kill people, not on getting rid of guns themselves.) Even if we disagree with the sentiments behind this, we have no trouble understanding what is meant.
This is an interesting use of language because, from a strictly literal viewpoint, it’s undeniable that guns do kill people. Not as animate, volitional “agents”, of course, but nevertheless Guns kill people is a perfectly acceptable English sentence. And indeed, it’s quite normal for inanimate, non-volitional “instruments” to be used as subjects: there’s nothing syntactically or semantically wrong with Scissors cut paper or The knife sliced easily through the soft, white cheese.
Perhaps, we might argue, kill is different from cut or slice – it requires an animate agent as its subject. (Maybe it’s a bit like eat, which can’t take an instrument as its subject: as I pointed out in my last post, we can’t usually say The fork ate the peas to mean “someone ate the peas with the fork”.) But this is clearly false: surely nobody has any problem with The avalanche killed the skier or Trains kill people who ignore red lights at level crossings.
No, Guns kill people is fine (strictly speaking, at any rate). But the aforementioned slogan does highlight something interesting about attitudes to language: although there’s nothing ungrammatical or unmeaningful about a sentence with an instrument as its subject, there is nevertheless a feeling that volitional agents make better subjects, and perhaps that it may even be in some sense incorrect to use an instrument as a subject when an agent would be available instead.
We see something similar in the arguments by cycling campaigners (e.g. this article) regarding the use of language in journalism relating to road collisions. Often, newspapers phrase things along the lines of A car collided with a cyclist or A lorry ran over a pedestrian. This, the cycling lobby claims, is undesirable because it appears to remove responsibility from the drivers of motor vehicles: cars and lorries do not generally run into things of their own accord, but because of actions taken by their drivers. In other words, given that in such incidents there is an agent (the driver), it is infelicitous to promote an instrument (the vehicle) to the status of subject.
Of course, in parallel with the gun case, an inanimate thing like a car or lorry is a perfectly acceptable subject of a verb like collide or run over as far as grammar or literal meaning is concerned. But the cyclists’ arguments nevertheless highlight, and indeed rest upon, an intuition that volitional agents are once again “better” subjects than instruments. Ordinary users of English have an impression that some types of construction are preferable to others, even when both are technically acceptable: an impression which links closely to what linguists have described as “thematic roles” like agent and instrument. This intuition may seem to support the linguistic analysis that agents are subjects by default, and instruments are only promoted to subject status when an agent is absent.
(In other cases the line between what is merely inappropriate and what is grammatically/semantically unacceptable becomes blurred. The article I linked to gives the example of [the cyclist] collided with a van, referring to an incident where the van was driven into the cyclist from behind. We would probably think of the cyclist here in terms of the thematic role of “patient”: he was not the principal cause of the action, didn’t bring it about on purpose and was the participant most affected by it. Is the use of a patient as a subject syntactically acceptable (as the journalist would appear to think), even if it is an undesirable phrasing, or is it just wrong in every way?)
So: even though things like thematic roles may seem like quite abstract linguistic concepts, it appears that they do have a role to play in the ways in which even non-linguists think about language – and in what is deemed advisable not merely semantically and syntactically, but socially as well.
Something you might find surprising if you delve into sign language literature is the familiarity of the terminology. When you see the word phonology you probably think about the study of sounds. You might even be shocked to discover that there is phonology for sign languages. This post will explain how the phonological terms of spoken languages can be applied to sign languages.
Spoken language phonology identifies the smallest contrastive sound units of language. In spoken languages phonemes differ in various ways (for example, place of articulation, voicing or aspiration). We know that phonemes are contrastive in a certain language when we find minimal pairs where only one of these features differs. For example, when we say the English words came and game, we know that the only difference between them is the voicing of the first consonant but this contrast is enough for them to be considered two different words. Place of articulation (PoA) is also contrastive. Game and dame both start with voiced stops, but one is velar and one is (usually) alveolar and this marks them as separate words. However, certain speakers pronounce dame with a dental stop (some Scottish accents, for example). As alveolar and dental stops are not contrastive in English, both would be considered acceptable variations of the same word. What this tells us is that not all contrasts are meaningful in all languages. We find the same situation when we look at contrastive units of sign languages.
PoA is contrastive in sign languages as well as in spoken languages. In sign languages PoAs are not places along the vocal tract but are various body parts where a sign takes place. These are called Locations. The same exact sign produced in two different Locations yields two different meanings. SEE and TELL in British Sign Language (BSL) are identical apart from their Location (from the eyes for SEE and from the lips for TELL) and this difference is what gives them separate meanings. Location is the first of five parameters that make up the phonology of signs.
The second parameter is Handshape. There are many possible handshapes and each sign language uses a certain sub-set of these as meaningful components of the language. Again, we can identify the handshapes used in a particular sign language by looking for minimal pairs. BSL, for example, has a contrast between a fist with the little finger raised (the [I] handshape) and a fist with the thumb raised (the [Ȧ] handshape). When we keep all other parameters the same and change just the handshape, two different signs are produced, for example PRAISE and CRITICISE. There are also handshapes that are contrastive in other languages but not contrastive in BSL. In American Sign Language there is a contrast between a fist made with the thumb over the fingers and a fist made with the fingers resting on the thumb. BSL does not have this distinction and use of either handshape for a sign such as EUROPE would not alter its meaning.
The orientation of the hand used in a sign is the third parameter. Orientation is the exact direction in which the handshape faces (upwards/downwards, leftwards/rightwards and towards/away from the signer). Even in gesture we can see how important hand orientation is, as we get a very different meaning if we turn the two-fingered peace sign around. In Britain this is offensive, yet this orientation may be seen as simply a variant of the same meaning in other cultures. Again, we can find minimal pairs in BSL where the only difference between signs is the orientation of the hand, for example NOW and BRITISH (the former having the handshape oriented palm up and the latter palm down).
The fourth parameter in sign language phonology is Movement. This parameter concerns exactly how a handshape moves in a sign. LIVE and FEEL have the same location, handshape and orientation, but the movement (repeated up and down Movement or short upwards Movement) marks them as distinct signs.
The final parameter concerns the non-manual features (NMFs) of the sign. This parameter includes facial expressions and lip patterns. There are some signs that share the same Location, Handshape, Orientation and Movement and are only differentiated by NMFs. By including English mouthing alongside the sign, we can clarify whether a sign means GARAGE or GERMANY. As well as mouthing, there are facial expressions in sign languages that distinguish between signs. For example, there is an NMF that marks negation (head shakes, mouth turns down and eyebrows raise and furrow). The sign MILK with the negation NMF becomes NO-MILK. There are other NMFs that can mark size and quantity. For example, one NMF indicates great size (cheeks puff out and eyebrows raise) so BAG with this NMF can become BIG-BAG.
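To make the idea of a five-parameter bundle concrete, here is a minimal sketch in Python (my own illustration, not part of any standard analysis): each sign is a tuple of (Location, Handshape, Orientation, Movement, NMF), and a minimal pair is any two signs that differ in exactly one slot. The PRAISE/CRITICISE and NOW/BRITISH contrasts come from the discussion above; all the other parameter values are invented placeholder labels, not a real BSL transcription system.

```python
from itertools import combinations

PARAMS = ["Location", "Handshape", "Orientation", "Movement", "NMF"]

# Each sign is a bundle of the five parameters. The two real contrasts
# (PRAISE/CRITICISE in Handshape, NOW/BRITISH in Orientation) are from
# the text; the remaining values are invented placeholders.
signs = {
    "PRAISE":    ("chest", "I (little finger raised)", "palm in", "repeated", "neutral"),
    "CRITICISE": ("chest", "A-dot (thumb raised)",     "palm in", "repeated", "neutral"),
    "NOW":       ("neutral space", "flat hand", "palm up",   "short down", "neutral"),
    "BRITISH":   ("neutral space", "flat hand", "palm down", "short down", "neutral"),
}

def minimal_pairs(lexicon):
    """Return (sign1, sign2, parameter) for pairs differing in exactly one slot."""
    pairs = []
    for (a, pa), (b, pb) in combinations(lexicon.items(), 2):
        diffs = [i for i in range(len(PARAMS)) if pa[i] != pb[i]]
        if len(diffs) == 1:
            pairs.append((a, b, PARAMS[diffs[0]]))
    return pairs

for a, b, param in minimal_pairs(signs):
    print(f"{a} / {b}: contrast only in {param}")
```

Running this picks out exactly the two minimal pairs, each labelled with the single parameter that distinguishes it.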
These five parameters are the same across all sign languages and, like spoken language phonology, each sign language has restrictions on the way in which these parameters may combine. Certain combinations of these parameters are phonotactically illegal (for example, some Handshapes are not made in certain Orientations). Orfanidou et al. (2009) found that when they presented BSL signers with phonotactically illegal nonsense signs, signers often used phonotactic knowledge to correct them. This suggests that native signers, like native speakers, have an underlying understanding of the phonotactics of their language.
Although phonology may at first seem about as far away as possible from the study of sign languages, I hope this post has shown that spoken language terminology and concepts can be successfully applied to another language modality. If you enjoy reading about sign linguistics, have a look at BSL QED’s short linguistics notes on BSL for more.
Sutton-Spence, R., & Woll, B. (1999). The linguistics of British Sign Language: An introduction. Cambridge: Cambridge University Press.
Orfanidou, E., Adam, R., McQueen, J. M., & Morgan, G. (2009). Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition. Memory & Cognition, 37(3), 302–315.
Again. A useful little word. Rather common. Rather uninteresting? Absolutely not! It’s kept a considerable number of linguists in work for the past 40 years. Consider this sentence:

Frederick opened the door again.
Now, what does ‘again’ add to the information conveyed here? It must be the case that Frederick had opened the door at some point before. This makes ‘again’ a presupposition trigger. The sentence it is part of does not just assert something – the proposition that Frederick opened the door – but also presupposes, or assumes, something else – that he had done it another time, and that other time was before the time that is asserted. This means that ‘again’ joins other additive particles like ‘too’ and ‘as well’, which behave in a similar way (consider ‘Frederick opened the door too’, which presupposes that someone else also opened the door).
But the fun doesn’t stop there. Consider two possible contexts for our ‘again’ sentence:

Context A: Frederick opened the door earlier; somehow it was shut; now he opens it once more.

Context B: The door stood open; Frederick shut it; now he opens it, restoring its original state.
Context A is what we’ve been thinking about already. The important thing is that Frederick had opened the door before, somehow it was shut, and now he’s doing it for a second, or nth, time. But would you agree that Context B also works as a background for our sentence? And here Frederick has not opened the door before; he’s reversing what he’s just done, restoring the door’s state of being open. For this reason, the reading in Context A is often called repetitive, and in Context B restitutive.
Perhaps you’re thinking: what’s so surprising about this? Doesn’t this just make ‘again’ like loads of polysemous words that have several related meanings? (Think of ‘newspaper’ here: I read the newspaper that my friend works at.) Well, some linguists (like Fabricius-Hansen, 2001) would agree with you. Others, noticing that in both cases there is repetition – either of the whole event of Frederick’s opening the door, or of the door’s state of being open – have tried another approach, one that has been fundamental to the development of decompositional semantics (Dowty, 1979).
The problem is that in Context B, what is repeated is not the action of opening the door (the verb ‘open’), but only part of that meaning, the end result – the state of the door’s being open. How can ‘again’ affect (or scope over, to use the technical phrase) only part of a verb’s meaning? Perhaps it’s because the verb’s meaning itself is made up of more basic building blocks. One solution (Dowty, 1979; von Stechow, 1996; Beck, 2005) is to decompose ‘open’ into CAUSE, BECOME, open (the capitals just tell us that these aren’t the same as English words, but rather semantic operators). Very informally, you then get something like this:

Frederick opened the door ≈ Frederick CAUSE [BECOME [the door open]]
We can then drop ‘again’ in at different spots, giving us the repetitive reading (a) and the restitutive reading (b) – ‘again’ scopes over what comes in the brackets to the right:

(a) again [Frederick CAUSE [BECOME [the door open]]] (repetitive: the whole event happened before)

(b) Frederick CAUSE [BECOME [again [the door open]]] (restitutive: only the door’s being open held before)
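The two attachment sites can also be sketched executably. This is just a toy model of my own (not from the literature cited here): verb meanings are nested tuples built from the operators CAUSE and BECOME, and ‘again’ wraps whichever constituent it scopes over, recording that the same constituent is presupposed to have held before.

```python
# Toy decompositional semantics: nested tuples stand in for semantic
# structure, and again() records a presupposition about its argument.

def again(constituent):
    """'again' asserts its constituent and presupposes a prior instance of it."""
    return {"asserts": constituent, "presupposes_earlier": constituent}

door_open = ("open", "the door")                        # the result state
opening = ("CAUSE", "Frederick", ("BECOME", door_open))  # the whole event

# (a) repetitive: 'again' scopes over the whole event
repetitive = again(opening)

# (b) restitutive: 'again' scopes only over the result state
restitutive = ("CAUSE", "Frederick", ("BECOME", again(door_open)))

# The repetitive reading presupposes a previous opening by Frederick;
# the restitutive one only presupposes that the door was open before.
print(repetitive["presupposes_earlier"])
print(restitutive[2][1]["presupposes_earlier"])
```

The point the model makes is simply that the two readings fall out of one lexical entry for ‘again’ plus two attachment sites, rather than from two separate senses.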
This may seem neat, or it might strike you as constructing a theoretical Taj Mahal to house a guinea pig. But actually it’s more appealing than that, because we can see that lots of telic verbs (that’s verbs with an inherent endpoint, like close, empty or wake up) work in the same way, plus a host of other types of verb that we don’t have time to get into here.
One intriguing point, though, is that breaking down the verb meaning into these more basic building blocks, between which, at the semantic level, ‘again’ can nestle, opens up perhaps more possibilities than we want.
What context would make a structure like Frederick CAUSE [again [BECOME [the door open]]] true – one where ‘again’ scopes over BECOME but not over the agent? Context A, certainly, but also a Context C: someone else opened the door before, it was shut, and then Frederick opened it. Here the action of the door’s being opened is repeated, but not the whole event including the agent (Frederick). Do we ever get this interpretation? It’s hard to tell, because such a Context C also entails Context B (repetition of the door’s being open), and it’s hard to disentangle our intuitions. In a study for my master’s degree, I looked into real speakers’ intuitions (not those dodgy linguists’) about such sentences and got mixed results as to whether scenarios like this are acceptable.
What do you think?
And that’s just the start of the fascinating properties of that innocent word ‘again’. Look out for another post, where I explore ‘again’, again!
Beck, S. (2005). There and Back Again: A Semantic Analysis. Journal of Semantics, 22, 3–51.
Dowty, D. (1979). Word Meaning and Montague Grammar. Dordrecht: Reidel.
Fabricius-Hansen, C. (2001). Wi(e)der and Again(st). In C. Féry & W. Sternefeld (Eds.), Audiatur Vox Sapientiae: A Festschrift for Arnim von Stechow (pp. 101–130). Berlin: Akademie Verlag.
von Stechow, A. (1996). The different readings of wieder ‘again’: A structural account. Journal of Semantics, 13, 87–138.
von Stechow, A. (2003). How are results represented and modified? Remarks on Jäger & Blutner’s anti-decomposition. In Modifying Adjuncts (pp. 416–451).
A secret vice – this was how J.R.R. Tolkien described his love of creating, crafting and changing his invented languages. With the popularity of his books and the modern film adaptations, the product of this vice is no longer as ‘secret’ as it once was – almost everyone will have heard of Elvish by now; some will have heard of Quenya and Sindarin; and a small number will have heard of more besides …
I started thinking about the theme of this post having read this article from the Guardian on constructed languages (or ‘conlangs’):
Conlangs can be used to add depth, character, culture and history to a fictional world, among many other things, but I think that Tolkien’s invented languages are in a class apart from other famous invented languages, e.g. Klingon, Na’vi, Dothraki, Esperanto, etc.
What many people don’t know is that Tolkien’s Elvish languages weren’t ‘invented for’ the Lord of the Rings, or the Hobbit or even what was to become the Silmarillion. In fact, in many ways it is more accurate to say that these stories and legends were invented for the Elvish languages!
Tolkien’s Elvish languages began to grow at about the time of the First World War, and they continued to grow for the rest of Tolkien’s life. Tolkien gave to two of these languages, Sindarin and Quenya, the aesthetic of two of his favourite languages, Welsh and Finnish respectively. However, rather than develop comprehensive dictionaries and grammars of the Elvish languages, Tolkien approached their invention from a primarily historical and philological perspective – something that the other famous conlangs do not do to anywhere near the same extent.
Sindarin and Quenya were designed to be natural languages, i.e. languages with their own irregularities, quirks and oddities (like real-world languages) but whose peculiarities would make sense when looked at from a historical linguistic perspective. Furthermore, Sindarin and Quenya are related languages, i.e. they share a common (and invented!) ancestor. Whenever Tolkien compiled anything like a dictionary, it was more akin to an etymological dictionary or a list of primitive roots and affixes. He would build up a vocabulary using these roots and affixes, then submit the results to various phonological changes, with language contact effects, borrowings, reanalyses and the like thrown in for good measure. (Did you know that the Sindarin word heledh ‘glass’ was borrowed from Khuzdul (Dwarvish) kheled?) The result is a family of related languages and dialects.
But these languages and dialects needed speakers, and their speakers needed a history and a world in which this history could play out. Tolkien believed that language and myth were intimately related – the words of our language reflect the way we perceive the world and myths embody these perceptions and are couched in language, yielding a rich melting pot of associations. To appreciate something of what Tolkien might have felt consider the English names for the days of the week or the months of the year. Why do they have the names they do? What does this tell us about our heritage and cultural history? What does it say about what we used to think and feel about the world? Now imagine thinking like this about other words … I found out earlier this week that English lobster is from Old English lobbe+stre ‘spider(y) creature’ (incidentally, lobbe ‘spider’ provided Tolkien with the inspiration for Shelob, the giant spider from The Two Towers (or, if you’re more familiar with the films, The Return of the King)). That is the kind of philological delight Tolkien wanted Sindarin and Quenya to have, and they do (nai elyë hiruva)!
In my last post, I wrote about some characteristics of tones (among others, they can “float”) and the theory of their origin – the science of tonogenesis. I mentioned that tones are highly areal: they either have a huge presence in a language family (Niger-Congo and Sino-Tibetan) or hardly show up at all (Indo-European). Even among regions where tones show up in large numbers, there are still significant differences in how they typically behave. Traditionally, tonologists tend to concentrate on either African (esp. Bantu) tone languages or Asian (esp. Chinese) ones, with relatively little conversation between the two camps. This is partly for historical reasons, partly because the points of interest are so very different between these two groups of languages. I will use today’s and my next post to introduce salient characteristics of African and Asian tone languages, and to show their impact on our understanding of phonology and, of course, language.
African tones are famous for their mobility. The Bantu language Chizigula (aka Zigula), spoken in Tanzania and Somalia, provides a particularly striking example. In this language, a verb is either toneless, or one of its syllables carries a H (high) tone. When I talk about verbs, I am really referring to verbal stems, which you can think of as the basic form of a verb without all the affixes. As often happens in African languages, Chizigula has a rich morphological system, with potentially many layers of affixes. The interesting thing is that when a Chizigula verbal stem with an H tone gets suffixes, the H tone always moves to the penultimate (second-to-last) syllable of the newly affixed verb. I say “always” because the H tone is absolutely hellbent on moving, no matter how many syllables it has to jump to do so. Consider the Chizigula verb for “request”, with and without suffixes, in (1).
(1a) lómbez ‘request’
(1b) ku-lombéz-a ‘to request’
(1c) ku-lombez-éz-a ‘to request for’
(1d) ku-lombez-ez-án-a ‘to request for each other’
Example (1a) shows the verbal stem, /lómbez/, where the H tone is attached to the segment /o/, marked with an acute accent. We take this tonal assignment to be basic and “underlying”, given that the verbal stem appears in isolation here. In (1b), with the addition of the suffix -a, the H tone moves rightwards to the now second-to-last syllable, /be/. In (1c) and (1d), with progressively more suffixes added, the H tone moves further and further to the right (no pun intended, for the politically conscious), but true to form it always ends up on the penultimate syllable, even when this means moving three syllables away from its underlying position.
So the Chizigula tone is a travel freak. What’s so interesting about that? Well, as I alluded to in my last post, the consequence of this and other findings about tonal mobility is nothing short of revolutionary for phonological theory. One resulting insight is that tones are “autosegments”: they are autonomous and independent of segments, which they can leave, move across and dock onto. Phonologists formalise this insight by positing separate tonal and segmental tiers, linked by association lines. I won’t go deeper into the finer theory, except to say that this formalism, in essence, is what we know today as Autosegmental Phonology. The following diagram depicts how the itinerary of the Chizigula H tone is represented in this scheme.
The H tone is originally linked to the syllable /lom/; under the pressure to have all H tones docked onto the penultimate syllable, the H is delinked from /lom/ and re-links with the penultimate /be/. You can easily extend this scheme to (1c) and (1d): all you have to do is do the same delinking operation, and then re-link the H tone to /ze/ in (1c) and /za/ in (1d).
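As a rough illustration, the delink-and-relink step can be modelled in a few lines of Python. This is a sketch under the simplifying assumption that syllabification and the underlying tone link are supplied by hand; the point it captures is that the surface position of the H ignores its underlying position entirely and always docks onto the penult.

```python
# Sketch of the Chizigula generalisation: whichever syllable the H tone
# is underlyingly linked to, on the surface it is delinked and re-linked
# to the penultimate syllable of the affixed word.

def surface_tones(syllables, underlying_h=None):
    """Return (syllable, tone) pairs after delinking/relinking.

    underlying_h is the index of the underlyingly H-toned syllable,
    or None for a toneless verb.
    """
    tones = [None] * len(syllables)
    if underlying_h is not None:
        # The surface position ignores the underlying link entirely:
        # the H always docks onto the penult.
        tones[-2 if len(syllables) >= 2 else 0] = "H"
    return list(zip(syllables, tones))

# ku-lombez-ez-a 'to request for' (1c): H is underlyingly on /lom/,
# but surfaces on the penult /ze/.
print(surface_tones(["ku", "lo", "mbe", "ze", "za"], underlying_h=1))
```

Feeding in the longer form from (1d) works the same way: however many suffix syllables you append, the H lands on the second-to-last one.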
Another insight gained from the Chizigula tones’ unusual migration pattern has to do with the problem of locality. Linguists tend to think that linguistic objects like tones can’t wander unrestrained from where they should be. In other words, there must be some kind of “locality condition” by which objects only move to an adjacent position. Chizigula tones moving three syllables away from their underlying position obviously stretch our definition of adjacency. In response to this and other so-called long-distance processes, phonologists now recognise “relativised locality”, in contrast to the stricter “absolute locality”. In a nutshell, it’s not the absolute distance (x syllables or y segments) that determines adjacency, but whether there are obstacles along the path of movement. Chizigula tones can travel long distances because nothing intervenes on their path; if Chizigula had low tones and one of them stood between the H tone and the penultimate syllable, the H tone might well have to cancel its travel plans. One language that does have this blocking effect is Luganda, where an H tone spreads freely until it encounters an L tone.
(2a) à-, bala, e-, bi-, kópo
(2b) à-bálá é-bí-kópo ‘he counts cups’
When all the stems and affixes in (2a) stand alone, only the first syllable of /kópo/ has a H tone, and the prefix /à/ has a L tone; the rest are toneless. When these are strung together to form the sentence in (2b), the H tone has spread and occupied four syllables until stopped by the L tone on /à/. This is but one small example illustrating relativised locality – more examples can be found in vowel and consonantal harmony processes.
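The Luganda pattern in (2) can be sketched the same way (again a toy model of mine, with hand-supplied syllables and underlying tones): an H tone spreads leftwards across toneless syllables and is blocked by the first syllable bearing an L.

```python
# Relativised locality in miniature: toneless syllables are transparent
# to spreading, while an L-toned syllable is an obstacle that stops it.

def spread_h_left(tones):
    """tones: one of 'H', 'L' or None per syllable; returns surface tones."""
    out = list(tones)
    for i, t in enumerate(tones):
        if t == "H":
            j = i - 1
            while j >= 0 and out[j] is None:  # toneless syllables get filled in
                out[j] = "H"
                j -= 1                        # an 'L' stops the spreading
    return out

# à-bálá é-bí-kópo: underlyingly only /à/ has an L and /kó/ has an H
syllables = ["a", "ba", "la", "e", "bi", "ko", "po"]
underlying = ["L", None, None, None, None, "H", None]
print(list(zip(syllables, spread_h_left(underlying))))
```

The output reproduces the surface form in (2b): the H occupies /ba/, /la/, /e/ and /bi/ and halts at the L on /a/, while final /po/ stays toneless.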
Thus in just one tonal process, a data point from a language spoken by around 20,000 people, we have seen so much of tonal phonology, and tonal phonology at its best. From the way the Chizigula H tone moves (and the Luganda H tone stops moving), we have a solid piece of evidence showing how our brains manipulate mental objects – the autonomous movement of tones (Autosegmental Phonology) and the conditions on their movement (relativised locality). Things will get better still, however, when we move to Chinese tone sandhi next time.
1. Autosegmental Phonology
Goldsmith, John A. 1976. An overview of autosegmental phonology. Linguistic Analysis 2. 23–68
Excellent slides on autosegmental phonology by Jochen Trommer. The figure in this post is taken from his slides: http://www.uni-leipzig.de/~jtrommer/Nonconcatenative/1a.pdf
2. Relativised locality
Nevins, A., & Vaux, B. (2004). The transparency of contrastive segments in Sibe: Evidence for relativized locality. GLOW, Thessaloniki.
Vaux, B. (1999). Does Consonant Harmony Exist? Presented at the Linguistic Society of America Annual Meeting.
A few days ago, a friend of mine engaged me in conversation about gendered pronoun usage in English. After spending some time considering the topic, they’d decided it would be a positive political move to start using gender neutral terms by default—not assuming, in other words, that they could automatically guess the gender identity and pronoun preference of anyone they met. They wanted my view—as a linguist, as well as someone interested in queer and feminist politics—on the practicality of switching to entirely gender neutral pronoun use, and on which pronoun was the best option.
A lot of ink, both figurative and physical, has been spilt on the issue of pronoun choice and gender neutral pronouns. Most mainstream discussion of the topic has concerned how to refer to individuals of unspecified gender in formal writing. Traditional style manuals advocated using ‘he/him/his’ in this context, but this has been criticised from a feminist standpoint for a long time. To my ears—and, I assume, to others of my generation—using ‘he/him/his‘ to refer to individuals of unspecified gender now sounds stylistically weird nearly to the point of ungrammaticality. You’ll more commonly come across other solutions in formal writing, like ‘she or he / her or him / her or his’ or ‘they/them/their’.
In queer politics, the same need for a gender neutral pronoun also arises for a different reason. People who are neither female nor male, or not solely female or male, such as nonbinary trans* people, may feel the need for a pronoun that doesn’t misgender them. In these spheres ‘they/them/their’ is common, but other alternatives are also used, such as the so-called Spivak pronouns ‘E/Em/Eir’ as well as the gender neutral pronoun ‘ze/hir/hir’ used by some online genderqueer communities.
My friend’s proposal is radical, but is not unique. One similar example you may have come across in the news in the last few years comes from Sweden. Some preschools in Sweden such as Egalia in Södermalm practice genuspedagogik—pedagogy focused on highlighting the effect of gender on children in educational contexts—and aim to use the recently coined gender neutral pronoun ‘hen’ (instead of feminine ‘hon’ or masculine ‘han’) for all children. This pronoun is a convenient fit in Swedish: it obviously resembles the feminine and masculine forms, and it happens also to have the same form as the (gender neutral) 3rd person pronoun in neighbouring Finnish. It is beginning to gain a little ground in Swedish: it has been used in children’s books, in parliament and even in a published legal judgement.
In this post, however, our focus is on the linguistic issues involved. So can you just choose to use a new, gender neutral pronoun in lots of contexts where your native grammar specifies you should use a gender marked form? Will people understand you? If it is possible, which of the various options are preferable?
Introducing a new pronoun into a language is an unusual enterprise. Languages add new words all the time and speakers have no trouble acquiring and using them, but these are what are referred to as ‘open class’ words: nouns, verbs, adjectives, adverbs. These classes of words are open in the sense that they can be added to and speakers have many strategies (derivational morphology) in their grammars for doing this. Consider a recently coined word like ‘selfie’: it’s immediately obvious how it has been formed and how that composition relates to its meaning. However, ‘closed class’ or grammatical words, such as pronouns, auxiliary verbs and prepositions, are much harder to coin. Speakers have no strategies for creating these words, but instead seem to list them as a fixed—closed—set in their mental grammars. So when we try to add a new one, we’re not really engaging in a normal linguistic process, and accordingly it’s a lot harder for such usage to become entirely automatic and unconscious in the way that most of our language use is. Anyone at home in queer social spaces is probably aware of how easy it is to make mistakes in using others’ preferred pronouns, especially when those pronouns are neologisms such as ‘E/Em/Eir’ or ‘ze/hir/hir’.
Nevertheless, there’s no reason to believe it an impossible task, and, I think, several good reasons to assume that it’s quite feasible. Speakers clearly do change their grammars over the course of their lifetimes, at least in minor ways, as they’re exposed to new grammatical variants through diffusion (that’s the spread of new forms between speakers). This is one of the normal processes of language change, and is going on all the time. Although this individual change is limited, the evidence is that the sort of changes that are easiest for adult native speakers to acquire are structural mergers—changes which remove a previously maintained grammatical distinction. And the introduction of a gender neutral pronoun is effectively just such a merger.
In addition, the target situation—one in which the 3rd person singular pronoun used in many, most or even all situations doesn’t encode gender—is perfectly normal, cross-linguistically. The map below (reproduced from WALS) shows gender distinctions in independent pronouns in languages across the world: white dots represent languages with no gender distinctions, and it’s easy to see that they’re pretty common.
So which gender neutral pronoun should my friend pick? Obviously this is primarily a political question. Nevertheless, I think that we can get another interesting insight here by comparing the introduction of a gender neutral pronoun to ‘normal’ (that is, not consciously initiated) language change. Generally, when innovative forms or usage patterns enter into a language, they do so by gradual spread along many axes: they spread between adjacent geographical areas, between interconnected social groups, by a gradual increase in frequency, and, crucially, gradually from linguistic context to linguistic context. A new form is generally innovated in a particular grammatical context from which it spreads, first to very similar grammatical contexts and eventually to very different ones. As a result, we’re relatively used to coming across and acquiring new usages that are partially familiar to us but have been extended to related-but-slightly-different contexts.
The proposed gender neutral pronoun that most resembles this ‘natural’ situation of language change is ‘they/them/their’. This already exists in most varieties of spoken Modern English as a gender neutral pronoun used in contexts where the gender of the referent is unknown or the referent is non-specific—in speech, sentences like ‘if someone wants a piece of cake, they should have one’ don’t sound at all marked. You might even come across it used by speakers who are intentionally avoiding mentioning a referent’s gender, such as when maintaining the anonymity of someone in an anecdote. So where for the other proposed pronouns it would be necessary to introduce an entirely new form, for ‘they/them/their’ all that’s needed is to extend the use of an existing form into new—but clearly related—contexts.
Traditional grammar makes use of the terms “subject” and “object” to describe the roles of nouns in a sentence. Prototypically a subject does the action; an object has the action done to it, for example:

(1) Lucy reads the book

where Lucy is the subject and the book the object.
Now this is all very well up to a point, but when we want to use “subject” and “object” as general labels referring to the meaning of a noun in relation to an action, we run into problems even within a language like English (for some problems that arise cross-linguistically see my previous post). Consider the following:
(2) the book is read by Lucy
In this (passive) sentence, the book has the same relation to the action described by the verb as before, but it appears as the subject, not as the object. In case there’s any doubt about this, consider the following pair of examples:

(3) Lucy loves me
(4) I am loved by Lucy
I in (4) behaves exactly like a subject: it is in the nominative case (I and not me) and it triggers agreement with the verb (I am), as well as preceding the verb. Yet its relation to the act of loving is pretty much the same as in (3).
Linguists have dealt with this problem by coming up with the notion of thematic roles, employing labels like AGENT and PATIENT to describe them. Unlike the relations of subject and object, these remain constant whether the sentence is active or passive:

(5) Lucy (AGENT) reads the book (PATIENT)
(6) the book (PATIENT) is read by Lucy (AGENT)
Lucy is the agent in both sentences; the book is the patient.
Whilst linguists have not yet managed to come to any sort of agreement as to what all the different thematic roles actually are, the notion nevertheless helps us in making a number of interesting observations. For example, a lot of verbs denoting changes of state can occur either as intransitives (with only one noun, or “argument”, involved in the action) or as transitives (with two arguments). This is the case, for example, with freeze (the words in capitals refer again to thematic roles):
(7) Nick (CAUSE) froze the ice cream (EXPERIENCER)
(8) the ice cream (EXPERIENCER) froze
What happened to the ice cream is the same in both instances (it froze), and therefore it seems rational to give it the same thematic role (here labelled EXPERIENCER). In (7), though, the ice cream is an object; in (8) it is the subject. One possible analysis of this is that when the CAUSE argument (e.g. Nick in (7)) – to be understood simply as the argument which causes the change of state described by the verb to occur – is not expressed overtly, an EXPERIENCER is “promoted” into the now-vacant subject position.
This might, in fact, be similar to what we see in the passive. Compare example (4) above with example (9) below, where in the absence of an agent the patient is promoted to subject:
(9) the book is read
As a general rule, we might want to say that all sentences require subjects, and that while these are preferentially agents or causes, they may be patients or experiencers too if no agent or cause is available.
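This general rule can be pictured as a toy priority ranking. The following is a minimal sketch of the generalisation as stated above, nothing more: the ranking, the function and all the names in it are invented for illustration and are not part of any established linguistic theory.

```python
# Toy model of subject selection: the highest-ranked thematic role
# among a verb's overtly expressed arguments is promoted to subject.
# The ranking itself is an illustrative assumption.
ROLE_RANK = ["AGENT", "CAUSE", "INSTRUMENT", "PATIENT", "EXPERIENCER"]

def choose_subject(arguments):
    """arguments maps each overtly expressed thematic role to a noun phrase."""
    for role in ROLE_RANK:
        if role in arguments:
            return arguments[role]
    raise ValueError("every sentence requires a subject")

# Active sentence: the agent outranks everything else.
print(choose_subject({"AGENT": "Lucy", "PATIENT": "the book"}))    # Lucy
# Agentless passive, as in (9): the patient is promoted.
print(choose_subject({"PATIENT": "the book"}))                     # the book
# Intransitive freeze, as in (8): no cause, so the experiencer surfaces.
print(choose_subject({"EXPERIENCER": "the ice cream"}))            # the ice cream
```

The point of the sketch is simply that one ordered list captures all three promotions at once: remove the higher-ranked argument and the next one down steps into subject position.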
Another thematic role which has been suggested is that of INSTRUMENT. In the following example, this is the role of the knife:
(10) Tiberius sliced the bread with the knife
An instrument, informally speaking, is the thing which the agent uses to effect the action. But instruments can also occur in subject position, e.g.
(11) the knife sliced the bread
the knife here can still be considered an instrument: unlike a typical agent, it isn’t doing the action of its own accord, and we assume there is still some unexpressed agent responsible for the slicing. So we have another type of possible alternation: where an agent is omitted, an instrument may be promoted to subject position in its place.
It’s fascinating that as a result of alternations like this the subject of a verb can actually be associated with multiple possible meanings. To give some more examples, the following show that a subject of break might be associated with at least three different roles:
(12) Wilhelmina broke the window (with the snowball)
(13) the snowball broke the window
(14) the window broke
Equally fascinating is that in some cases these alternations can’t occur. For example, in Imhotep ate the peas with a fork, we have an instrument – a fork – but eat can’t take an instrument as its subject like break or slice can: we can’t (generally) say *A fork ate the peas to mean “some unspecified person ate the peas with a fork”.
These possible and impossible alternations seem to suggest a lot about the nature of the lexicon and/or the grammar. They are therefore invaluable tools for linguists seeking to understand better how languages work.
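One crude way to picture the lexical side of this is a lexicon that records, verb by verb, whether the instrument-subject alternation is available. Again a hedged toy sketch: the flags and the function below are invented for illustration and bear no resemblance to a real lexical entry, but they encode exactly the judgements discussed above.

```python
# Toy lexicon: may an instrument be promoted to subject of this verb?
INSTRUMENT_SUBJECT_OK = {
    "sliced": True,   # "the knife sliced the bread"
    "broke":  True,   # "the snowball broke the window"
    "ate":    False,  # *"a fork ate the peas"
}

def instrument_subject(verb, instrument, obj):
    """Return the instrument-subject sentence, or None if the
    alternation is blocked for this verb."""
    if INSTRUMENT_SUBJECT_OK.get(verb, False):
        return f"{instrument} {verb} {obj}"
    return None

print(instrument_subject("sliced", "the knife", "the bread"))  # the knife sliced the bread
print(instrument_subject("ate", "a fork", "the peas"))         # None: alternation blocked
```

That the difference can be stated as a single per-verb flag is, of course, the interesting empirical question: whether these alternations are arbitrary lexical facts or follow from deeper properties of verb meaning.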
Well, it had to be done. It’s that time of year when each subject tries to find its link to Christmas, its festive cheer, however tenuous.
Our very own Cambridge University website (‘for staff’ section) posted this seasonal offering last week: ‘Festive tastes have changed but Christmas is still a cracker’. It’s about a (pilot) study with a corpus of spoken English – much needed for corpus-based linguistics, which has, of course, historically been limited to the written mode. But rather than ‘language as a window into the mind’, it’s ‘language as a window into society’. For example: “When it comes to Christmas stalwarts, sherry and brandy appear to have fallen out of favour over the last 20 years, replaced by vodka, gin and even champagne, all of which are being talked about more”.
I’m not personally convinced that we can draw firm conclusions about what’s important/popular/uplifting from frequency of word-use – are we in the realms of sociology rather than linguistics? – but it’s a festive, fun and thought-provoking read. And most importantly, it tells you how you can contribute to the Spoken British National Corpus.
Happy Christmas to all our readers!
Every day the useful and now omnipresent Google serves me up a selection of ‘language’ news items. Quite frequently, these are not much to do with Language at all, or at least not as we Linguists think about it (‘body language experts say X is lying when he says…’), but this morning this one caught my eye: ‘How to speak MONKEY: Researchers uncover the sophisticated primate language – which even has local dialects’. I was intrigued to see whether this was just another round in the ‘oh yes they talk like us, oh no they don’t’ match, so I clicked through to the Daily Mail site.
As I scanned down the page, what really caught my eye was the word ‘implicatures’ – in common parlance in linguistic circles, of course, but a tad surprising in this particular publication1. And with my Gricean hat firmly on, questions immediately exploded in my mind: monkeys and implicatures?! Monkeys computing implicatures. That’s a surprising thought, because implicatures, in a Gricean world, are inferences you make about a speaker’s meaning with reference to their intentions; this requires ‘mind-reading’, or Theory of Mind. But nonhuman primates have at most only first-order intentionality, not the second-order intentionality required2. And such inferences are what makes human communication so rich, versatile, and, well, human.
In fact, the newspaper piece was a report of an article recently published in Linguistics and Philosophy, less sensationally entitled ‘Monkey semantics: two ‘dialects’ of Campbell’s monkey alarm calls’3. Tempting as it was, I admit I did not wade through all 60 pages of the original in detail. But here are the main points:

- Campbell’s monkeys produce a small set of alarm calls, including krak, krak-oo (‘weak general threat’), hok (‘aerial threat’) and hok-oo.
- The study compares two populations, or ‘dialects’: Tai, where leopards are a dangerous ground predator, and Tawai, where there are no such predators.
- On Tai, krak functions as a specific leopard alarm; on Tawai, the same call serves as a general alarm.
- The authors propose that krak’s literal meaning is a general alarm in both dialects, and that on Tai this meaning is ‘strengthened’ to the leopard reading by an implicature-like inference.
Here you can see where the implicatures came into all this: the ‘strengthening’ of meaning that the authors suggest is very much akin to the strengthening of meaning in scalar implicatures, e.g., from some (and possibly all) to some (and not all). Specifically, in Tai the alternatives are krak-oo, hok and hok-oo, and given that the more informative ‘weak general threat’ or ‘(weak) aerial threat’ are available but not used, these are negated and krak is left with the meaning of ‘dangerous ground threat’, which in this region is leopards. However, ‘dangerous ground predator’ is pretty contradictory on Tawai, as there are no such predators, and so the inference does not occur.
The authors are quick to point out that of course this has to be a very simple inference: “all that is needed is a—possibly automatic, unconscious, and non-rational—optimization device by which more informative calls ‘suppress’ less informative ones” (p.480). But then, is it much like an implicature at all, as I described implicatures earlier?
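The quoted ‘optimization device’ can be captured in a toy computation: strengthen a call’s meaning by negating the meanings of its strictly more informative alternatives, unless the strengthened meaning would be contradictory in the local environment. To be clear, the threat categories and set-based call meanings below are my own simplifications for illustration, not the authors’ formalism.

```python
# Toy sketch of strengthening by suppression: a general call's meaning
# loses whatever its strictly more informative alternatives already
# cover. All meanings here are illustrative simplifications.
THREATS = {"weak_ground", "dangerous_ground", "weak_aerial", "dangerous_aerial"}

CALLS = {
    "krak":    set(THREATS),                        # general alarm
    "krak-oo": {"weak_ground", "weak_aerial"},      # weak general threat
    "hok":     {"dangerous_aerial"},                # (dangerous) aerial threat
    "hok-oo":  {"weak_aerial"},                     # weak aerial threat
}

def strengthen(call, local_threats):
    """Meaning of `call` after more informative alternatives are negated,
    restricted to the threats actually present in the environment."""
    literal = CALLS[call]
    meaning = set(literal)
    for other, other_meaning in CALLS.items():
        if other != call and other_meaning < literal:  # strictly more informative
            meaning -= other_meaning
    strengthened = meaning & local_threats
    # If the strengthened meaning is contradictory locally (nothing it
    # could refer to), the inference fails and the literal meaning stands.
    return strengthened if strengthened else literal & local_threats

# Tai: dangerous ground predators (leopards) exist, so krak strengthens.
print(strengthen("krak", THREATS))                            # {'dangerous_ground'}
# Tawai: no dangerous ground predators, so krak stays a general alarm.
print(strengthen("krak", THREATS - {"dangerous_ground"}))
```

Notice that nothing in this computation reasons about anyone’s intentions: it is pure set subtraction, which is exactly why it looks more like an ‘automatic, unconscious, and non-rational’ device than an implicature in the Gricean sense.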
The interesting thing is that as soon as you start doing a formal description of nonhuman language using the same categories and terms, you have to ask again what you mean by them when applied to human language. You see, I was giving you just one view of implicature, and scalar implicature in particular. That view, in the tradition of Grice, holds that when a speaker produces an utterance which does not conform to the maxim of quantity, in that it could be more informative (e.g., I ate some of the cookies would be more informative if it were I ate all of the cookies), the hearer reasons that, given that the speaker is rational, co-operative, informative, etc., there must be a reason that they did not utter the more informative alternative – namely, that it isn’t true. Trying to attribute this kind of pragmatic process to other primates, going on present knowledge, is not going to pass muster.
Another approach, however, has scalar implicatures down as something much more grammatical – there is a silent and invisible element in the sentence that adds in the extra meaning. This means that some (but not all) of the inference can be derived without complex higher-order reasoning about speaker intentions. And it’s perhaps on this kind of view that one can talk about ‘monkey pragmatics’.
Incidentally, Julia Fischer, of the German Primate Centre, also considers “the investigation of the role of previous and actual contextual information on animals’ responses to signals as one of the most exciting challenges in our field. Studying animal pragmatics may turn out to be more fruitful than assessing the symbolic or syntactic aspects of animal communication”4. Here, it seems that ‘pragmatics’ is being used in yet another sense – any inference in communication that draws on cues from context.
So here, from our distant relations, we have a reminder that ‘Pragmatics’ for humans is a murky area, with vastly differing scopes and approaches; pragmatics for primates may be an even harder nut to crack.
2. first-order vs second-order ToM
First-order intentionality involves intending to change another person’s behaviour; second-order intentionality involves intending to change their state of mind.
Theory of Mind is what, on one theory, enables us to think about other people’s thoughts; a full-blown Theory of Mind has yet to be found in any non-human.
3. Schlenker, P., Chemla, E., Arnold, K., Lemasson, A., Ouattara, K., Keenan, S., Stephan, C., Ryder, R., & Zuberbühler, K. (2014). Monkey semantics: Two ‘dialects’ of Campbell’s monkey alarm calls. Linguistics and Philosophy, 37(6), 439–501.
4. Fischer, J. (2013). Information, inference and meaning in primate vocal behaviour. In Stegmann, U. (ed.), Animal Communication Theory: Information and Influence. Cambridge: Cambridge University Press, pp. 297–317.