Generational gap x language purism = language death?

The International Congress of Celtic Studies XV in Glasgow featured a discussion roundtable on the future of the “Celtic” languages, initiated and organized by yours truly. With several experts presenting papers on the state of Irish, Scottish Gaelic, Welsh and Breton, contributions on Manx, and some Cornish activists present, the event lacked neither expertise nor interest, with a healthy audience of a few dozen adding to the nine panelists. The reason for this interest lies in the fact that all of the surviving Celtic languages are to some degree endangered (see previous post) or indeed revived. Welsh is the healthiest of the bunch, with about half a million regular speakers, while Breton – which had around one million speakers a hundred years ago, 40-50% of whom are estimated to have been monolinguals, fluent only in Breton – is close to disappearing as a native language.
Note the careful differentiation of terms concerning the “speakers” of a language: speakers of a minority language can be categorized into different groups, sometimes up to eight of them. Common ones are “traditional native speakers” and speakers of a non-traditional variant (particularly in Irish). Looking at the non-fully-native varieties, various (often useful) distinctions emerge.

There are, for instance, “heritage speakers”: speakers who come from a certain (minority) linguistic background (say, Irish) through both of their parents’ mother tongue, but for whom, due to the utter dominance of another language in the community (for example English), the language they (theoretically) learnt first and foremost is not their best language. The level they attain in their heritage language can vary considerably, depending for instance on how pervasive the dominant language is and on whether the heritage speaker grew up in an emigrant context or in a minority-language context in the country of origin itself. Some languages exist almost exclusively as heritage languages, as could increasingly be argued for Scottish Gaelic.

Irish (as well as Welsh, and to a lesser extent Scottish Gaelic) has the added dimension of featuring a substantial number of “new” (native) speakers – in a sense the opposite of heritage speakers – as well as occasional speakers of various levels of competence. New (or neo-) speakers are speakers brought up predominantly in a language (typically a minority language) which was not their parents’ own native language, for example English-speaking parents raising their children in Irish. Interestingly, the reverse case – parents trying to raise their children in English despite being more or less monolingual Irish speakers themselves, especially in the 19th century and in the wake of the Irish Famine (1845-48) – played a fundamental role in the decline of Irish, as well as in the development of Hiberno-English, with its unprecedented output of great literary works around the turn of the twentieth century.
Additionally, among learners there is a large group of pupils speaking Irish (or Welsh) at school. But the transfer out of the school context, never mind the ‘holy grail’ of intergenerational transmission upon which the survival of a language is generally deemed to hinge, remains elusive. Where the transfer has generally worked is with cultural community speakers who meet in conversation groups simply for the love of the language. With all non-fully-native speakers, however, the level and frequency of usage varies hugely: the vast majority never quite transcend a basic level or attain fluency combined with grammatical accuracy.
With this background to the discussion round, the group of panelists was carefully chosen to include representatives of both the traditional languages and the new varieties, often coinciding with different generations (referring to speakers as well as the academics representing them).
One major issue of minority language policy is the dichotomy between traditional and non-traditional speakers. In extreme cases like Breton, these two groups are partly unable (or unwilling) to communicate with each other. (Fascinatingly, this dichotomy proved to carry over, to some extent, to the academics!) The problem is that in practice there appears to be a trade-off over which group to focus on. This rests not only on budgetary constraints, though they may play a role, but on the fact that there is an identity gap between these groups, which often coincides with a generational gap. Instead of perceiving themselves as one privileged, united linguistic community – which does happen in theory and, of course, in some cases – the groups often drift towards growing mutual resentment. At the core lies the double-edged sword of identity and language purism: neo-speakers, and even more so learners, sometimes struggle with their linguistic identity, especially when faced with traditional speakers, who in turn feel their native language to be a possession of their own, tarnished by the often untraditional or even entirely ungrammatical forms of learners. (Imagine almost everyone you talk to speaking incomprehensible English while another, better medium of communication is readily available – similar to the “Whose English?” debate, only with heightened stakes due to the precarious situation of the languages.) In Brittany, this leads to native speakers who volunteer to teach at schools – precisely to bridge the gap – often being politely turned away, so as to “protect” learners from disillusionment at their own shortcomings in Breton proficiency. Manx, on the other hand, being a dead-and-revived language without traditional native speakers, has shed the burden of divided speaker groups and hence forms more of a linguistic unit, albeit a rather small one, with a couple of hundred speakers at best.
While panelists were initially at pains to deny the existence of a chasm between traditional and neo-speakers, it became increasingly difficult to paper over the cracks during the discussion. The trade-off between traditional language purism and getting more speakers to actually use a language (through their own choice, as well as by being given the opportunity by other speakers and by the state’s language policy) will remain with us until we find a solution to this identificational and speech-generational gap.
The Fifth Cambridge Conference on Language Endangerment (Friday, July 31st) will address similar topics.

Poetry on the Hands

If you ask a primary school child what a poem is, you might get a reply as simple as “words that rhyme.” However, as adults we know that poetry is far more complex and that it can take many different forms. If we were to limit our understanding of poetry to simply “words that rhyme”, we would miss out not only on whole swathes of English-language poetry but also on poetic forms in other languages, such as haiku or classical Chinese tonal poetry. If we understand that spoken language poetry is not exclusively about rhyme, then we should also acknowledge that poetry itself is not exclusively spoken or written. In this post, I will briefly write about some features of sign language poetry and compare these to spoken language poetry.

Rhyme
Rhyme is probably the first thing that pops into your mind when you think about poetry. Perfect rhyme (like that between ‘plate’ and ‘date’) is the most obvious, but it is just one of a number of possible rhymes. For example, poets frequently use assonance (words sharing a vowel sound, like ‘purple’ and ‘curtain’), consonance (words sharing consonants, like ‘bitten’ and ‘better’) and alliteration (words with the same initial consonants, like ‘shiver’ and ‘shake’). They can use these rhymes to create a certain effect, for example using a series of fricatives (like [s], [z], [ʃ] or [ʒ] in English) to evoke the sound of the sea. Rhyme essentially employs phonological features to create artistic effect. The same is true in sign language poetry. In my last post, I discussed sign language phonology. Sign languages exploit phonological features to create rhyme in the same way spoken languages do. Poets create rhyme by using a series of signs that share one or more phonological features (handshape, location, movement, orientation and nonmanual features). For example, a BSL poet might describe a scene of snow falling in a forest whilst a deer walks past a log cabin with a fire inside. This would be a series of rhymes in BSL because SNOW, FOREST, DEER and FIRE all use the same handshape (you can see rhyme with repeated use of a flat handshape in Walter Kadiki’s ‘Butterfly Hands’, below). Similarly, a poet might use a series of signs with the same movement or the same location to create rhyme. Related to rhyme is the frequent use of symmetry in sign poetry. Symmetry is found when both hands have the same handshape and their locations mirror each other, as in the BSL signs AGREE or CROCODILE.
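To make the idea of rhyme-as-shared-features concrete, here is a minimal sketch in Python (the feature values below are illustrative placeholders of my own, not real BSL transcriptions):

    # Toy model: a sign as a bundle of phonological features.
    # Feature values here are made up for illustration only.
    signs = {
        "SNOW":   {"handshape": "flat", "location": "neutral", "movement": "flutter-down"},
        "FOREST": {"handshape": "flat", "location": "neutral", "movement": "sway"},
        "DEER":   {"handshape": "flat", "location": "head",    "movement": "hold"},
        "FIRE":   {"handshape": "flat", "location": "neutral", "movement": "wiggle-up"},
    }

    def rhyme(a, b):
        """Two signs 'rhyme' if they share at least one feature value."""
        return any(signs[a][f] == signs[b][f] for f in signs[a])

    print(rhyme("SNOW", "DEER"))  # True: both use the same (flat) handshape

On this way of looking at it, a sign poet composing with a run of same-handshape signs is doing much the same thing as a spoken-word poet using alliteration.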

Rhythm
I am sure if you think back to the first poem you ever analysed at school you would remember having to mark stressed and unstressed syllables and count how many feet (groups of syllables) there were in a line. You may remember being told how sonnets are written in iambic pentameter, supposedly to evoke the rhythm of a human heartbeat. Clever use of rhythm in spoken poetry can create a variety of effects, such as echoing the canter of the cavalry in Tennyson’s ‘The Charge of the Light Brigade’ or the sound of a steam train thundering along railroad tracks in Auden’s ‘Night Mail’. Similarly, sign language poets can alter the speed and stress of signs to create a certain rhythm. For example, a poem about lying in the sun might have slow, languid movements, but a poem about running away from a tiger would have sharp, hurried movements. Watch Jolanta Lapiak’s ‘The Moon in my Bedroom’ to see how she uses rhythm to create a relaxing night-time scene.

Literary Devices
Sign language poetry is capable of employing all the same literary devices as spoken language poetry. For example, poems often have allegorical meanings, especially related to how Deaf culture is treated by the hearing population. Another literary device often employed in sign poetry is anthropomorphism, as it is possible for a signer to role-shift and ‘become’ a particular creature in the narrative (see this device being used in Richard Carter’s poem ‘Deaf Trees’, below). Sign language poems can also use irony, hyperbole, understatement and a whole host of other literary devices to relay their message in a visually striking way.

The wonderful thing about sign language poetry is that even if you do not know the language being used, you can still appreciate the visual imagery and decipher a certain amount of meaning. I hope that this post has encouraged you to go out and explore this dynamic medium. If you are interested in more of Richard Carter’s poetry, visit his website. If you would like to learn more about Jolanta Lapiak’s work, see her website. For more insight into sign language poetry, look for Dr Rachel Sutton-Spence’s wonderful book (written with Paddy Ladd and Gillian Rudd) ‘Analysing Sign Language Poetry’ (Basingstoke: Palgrave Macmillan, 2004).

Airing our dirty laundry

Recently I was at an excellent xprag.de workshop on methodology in Berlin. One of the themes that kept cropping up was the need to ‘air our dirty laundry’ – to share the studies that didn’t quite work out the way we expected, that maybe told us nothing at all (apart from the fact that we’d come up with a not-so-great design), that certainly won’t be published. But that doesn’t mean they were a waste of time, because by learning from them and not keeping those lessons to ourselves we can make a – perhaps teeny tiny but not unimportant – contribution to progress in our corner of Linguistics (or wherever you happen to find yourself in academia).

So here’s my contribution to this mission.

As I mentioned here, my PhD research is (partly) about how children develop the ability to make pragmatic inferences, and particularly what we linguists call implicatures – meaning that the speaker implies (or the hearer infers) beyond the literal meaning of the speaker’s utterance. Here are a couple of classic cases (where +> indicates implicated meaning).

  • Bob: Did you meet her parents?
    Barry: I met her dad.
    +> I met only her dad (and not her mum)
  • Bob: Did you eat the cookies?
    Barry: I ate some cookies.
    +> I ate only some cookies (but not all of them)

Now, the crucial thing that linguists who identify as some sort of Gricean would maintain is that these inferences don’t take place in a vacuum, without any regard for who the speaker is, what they’re like or what they know. Rather, the hearer (and speaker) pay attention to the context, to what they mutually know, and to whether the speaker is co-operative (truthful, informative and using ‘normal’ language for the situation) and knowledgeable about what they’re saying. On that basis, the hearer makes some sort of inference about the speaker’s intended meaning:
[Figure: a schematic of the hearer’s reasoning towards an implicature]

Conversely, if any of those assumptions (1, 2, 6 in the figure) isn’t met, then the implicature won’t go through – or so the story goes.
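As a toy illustration (my own sketch, not a model anyone has proposed), you can think of the hearer’s reasoning as a simple gate on those assumptions:

    # Toy sketch of Gricean 'gating': the scalar inference only goes through
    # if the hearer takes the speaker to be co-operative and knowledgeable.
    # Purely illustrative - real pragmatic inference is gradient and context-sensitive.
    def interpret(utterance, cooperative, knowledgeable):
        if "some" in utterance and cooperative and knowledgeable:
            return utterance + " +> not all"
        return utterance  # literal meaning only

    print(interpret("I ate some cookies", cooperative=True, knowledgeable=True))
    # -> I ate some cookies +> not all
    print(interpret("I ate some cookies", cooperative=True, knowledgeable=False))
    # -> I ate some cookies   (partially ignorant speaker: no implicature)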

Indeed, a few studies have shown that knowing that the speaker is at least partially ignorant about a situation they’re describing reduces the rate at which adult hearers make such inferences. For example, the difference between:

  • At my client’s request, I meticulously compiled the investment report. Some of the real estate investments lost money.

and

  • At my client’s request, I skimmed the investment report. Some of the real estate investments lost money. (Bergen & Grodner, 2012)

has an effect on the reading speed of critical segments (from which the presence of an implicature inference in the first but not the second case can be deduced).

We also know that young children are sensitive to how reliable speakers are when learning new words (which arguably might involve some sort of pragmaticky inference too). So what my study aimed to do was to find out whether children, in my case 5-year-olds, would be sensitive to the speaker’s co-operativity and whether this would affect the rate at which they made implicature inferences.

First of all, children were introduced to a character – let’s call her Sally – and listened and watched as she showed herself to be an under-informative and irrelevant speaker (and hearer).


Then the children listened to some more stories about Sally, with the experimenter (aka me) telling the story, and Sally ‘interrupting’ with each critical sentence which could implicate something. Their task was to pick the picture that went with the story for each of these sentences.


The hypothesis was that if children noticed and took account of the fact that Sally is unco-operative, then they would not draw the potential implicatures, and so would not choose the picture reflecting an implicated meaning, while picture choice for straightforward control sentences, where only the literal meaning is available, should be unaffected1.

So what happened? The kids certainly noticed (sometimes with a bit of experimenter prompting) that Sally was an odd communicator. When they had to choose a picture based on something Sally had said that was under-informative and did not distinguish between the two options available, they were rightly stumped. But when it came to the test phase, where one picture displayed the implicated meaning and one the literal meaning, they seemed to forget all about this. Or rather, they seemed very relieved that they could now get on with their task without the puppet causing problems! Only in one case did a girl decide that Sally must persistently mean something different from what she said, but she applied this across the board, including to the control items.

Why did this happen? Does it tell us something about how kids are different from adults (e.g., unable to keep track of speaker traits because of a shorter attention span or memory)? Unlikely. I think a more probable explanation lies in the task: the children were told to choose a picture that matched the story, and that was their main goal. So they would use any strategy to achieve it, even if that meant disregarding what they knew about the speaker and deriving the pragmatic inference ‘as per normal’. Prior experience, and therefore expectations, may have played a role too: for example, quite a few children were very clear that ‘some’ means ‘not all’ (contrary to the view of pragmaticians in this field).

The lesson here is to think about the experience of the participant in the task, and how the goals of the task interact with principles of communication, like Grice’s co-operativity. Are they in concert, or are they opposing forces that the listener-participant has to resolve? Is it (just) something linguistic that we’re asking participants to do, or some higher-level or conscious reasoning?

As for me now, it’s back to the drawing board.

1. More specifically, in one version, there was a training phase with 6 items in which the character was either an unco-operative speaker or an unco-operative hearer, in stating or inferring under-informative or irrelevant content. There was then a test phase with 2 stories (4 in the full version), each containing 8 items, 3 of which tested scalar, ad hoc and relevance implicatures, 3 of which were controls, and 2 of which were under-informative (as a reminder of the character’s unco-operative nature).

References
Bergen, L., & Grodner, D. J. (2012). Speaker knowledge influences the comprehension of pragmatic inferences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(5), 1450.
Sobel, D. M., Sedivy, J., Buchanan, D. W., & Hennessy, R. (2012). Speaker reliability in preschoolers’ inferences about the meanings of novel words. Journal of Child Language, 39(1), 90–104.

Grow your own sentences

If you’re interested in language, chances are you’ve wondered about things like how we put bits of language – sounds, meanings and words – together to create some larger expression of communicative meaning. In other words, you’ve probably wondered at some point about how to make a sentence. If you’ve been particularly keen and tried to read up on how linguists examine sentences, you’ve probably come across a bunch of funny-looking, often intimidating diagrams known as ‘syntax trees’. Unsurprisingly, many people are put off by the perceived complexity of a syntax tree, and are thus unable to go much further in their quest to understand how we make sentences. This post aims to resolve this problem by showing you the basics of how to grow your own syntax tree.

Why would I want to grow my own syntax tree?

Trees are important because they help us to understand how (and perhaps even why) we put linguistic items together when creating a sentence, since the pattern underlying a sentence isn’t necessarily the same as what we see on the surface. For example, Groucho Marx’s famous line ‘I shot an elephant in my pyjamas’ is dependent on two different syntactic structures for the humorous double meaning, enabling the one-liner ‘and how he got into my pyjamas I’ll never know’. If you can ‘grow’ your own sentence tree, you’ll be able to map out the patterns, and therefore uncover the underlying differences in these structures, for yourself.

Sowing the seeds: the basics


A syntax ‘tree’

Before we start, it’s worth remembering that the theory behind growing a syntax tree is still a work in progress; although linguists tend to agree on some fundamentals, there’s not necessarily a right or a wrong way of sowing the seeds and pruning the tree. Here, our basic syntax tree has the following: a verb-based “root”, a tense “trunk” and a sentence “crown”.

Our tree is anchored by its roots: the verb from which the rest of the sentence grows. A verbal ‘root’ (“VP”; ‘P’ stands for ‘phrase’) in English could be: grow, inspect, calculate, shake, wriggle.

Of course, we can’t have a tree without a trunk: likewise, we can’t have a verb unless it is appropriately modified to illustrate tense (or similar inflection). To illustrate tense (“TP”), we might need to modify a verb to show that it is an infinitive, e.g. to grow, to inspect; or a present, past or future tense: (he) shakes, calculated, will wriggle.


Tree Structure template

Finally, just as a crown tops off a tree, in a syntax tree, the ‘crown’ (“CP”) tops off the structure and tells us (amongst other things) what type of sentence we’re dealing with. Question words (e.g. ‘how many?’) and subordinators (e.g. ‘that’, ‘if’) go here, indicating ‘interrogative’ and ‘embedded sentence’ respectively. For the basic trees we’re growing here, we don’t need a CP, but in a real-life sentential forest, you’d of course want to sow sentential seeds that will grow into different types of sentences.

From seeds to sapling: your first syntax tree

All units are grouped together in twos, and are represented by a binary ‘branch’ (a triangle without the bottom line) in the tree. The more our sentence grows, the more the branches on our tree grow.


‘Tense’ and ‘Verb’ slots filled in

The three-level structure CP-TP-VP gives us our core sentence (or ‘tree’) template, but you’ve probably noticed there’s something missing: the ‘actors’ that take part in the ‘scene’ described by the verb, e.g. she will eat a cake. Now, in the surface structure, she is higher than the tense (will) and the verb (eat), but, as you might agree, the participants are a pretty crucial part of the sentence. A participant doesn’t denote sentence type (CP) or tense (TP). Instead, sentence participants are involved in telling us who does what to whom.

We already know that the ‘doing what’ is illustrated by the verb (i.e. ‘doing’ is the action, and ‘what’ is the actual meaning of a verb), anchoring the ‘root’ of our sentence. It stands to reason that the who and to whom also anchor the sentence within its ‘roots’. Indeed, there is a lot of cross-linguistic evidence for this, but all we need to know for now in order to grow our tree is that the participants – the subject (the “who”) and the object (the “(to) whom”) – originate at the roots of our tree, too.

Since all units are grouped together in twos, we next need to work out what groups together with what: in a sentence like ‘she will eat a cake’, the verb can only group first with either the subject (‘she’) or the object (‘a cake’). This initial grouping can then group with whatever’s left over, to form a larger unit. To work out what groups with what, we can ask the following questions:

Question: What will she do?

Answer: Eat a cake. ✓

 

Question: What will happen to the cake?

Answer: She will eat. ✗

Answer: She will eat it. ✓

From the above questions, we can tell that ‘eat a cake’ (i.e. verb + object) forms a complete group, whereas ‘she will eat’ (i.e. subject + verb) does not. We need to substitute in ‘it’ to complete the latter phrase, i.e. we need to put an object in to make it work. This suggests that the verb + object combination is our first grouping, because it can stand alone (subject + verb cannot). The subject + (verb + object) combination must be our second grouping. Indeed, ‘she will eat a cake’ can stand alone as a functioning phrase, as exemplified by the following question:

Question: what will happen?

Answer: She will eat a cake.  ✓

Our sentence’s root structure must look like this:
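(In labelled-bracket notation – a linear way of writing out the same tree – the root structure is roughly [TP will [VP she [eat [a cake]]]]; reading the words off from left to right gives ‘will she eat a cake’.)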

Of course, ‘she’ is now in the wrong place for the sentence we are growing (note that we have grown a simple question, though!). We must therefore find somewhere for ‘she’ to move to in order for the sentence to make sense. There is only one slot remaining: just above ‘will’. And that gives us the correct surface order:

'She' moves from its origin in the VP to above the tense slot ('will'), in order to grow the correct surface structure
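If you like to tinker, here is a minimal sketch in Python (my own illustration – nested pairs standing in for binary branches, with the trace omitted for simplicity) of growing and rearranging the sentence:

    # Binary branches as nested pairs: (left, right).
    # First grouping: verb + object ('eat' + 'a cake'), as the tests showed.
    vp = ("she", ("eat", ("a", "cake")))   # 'she' originates down in the VP
    root_structure = ("will", vp)          # the tense 'trunk' sits on the verbal 'root'

    # Movement: 'she' climbs to the slot just above 'will'.
    surface_structure = ("she", ("will", ("eat", ("a", "cake"))))

    def leaves(node):
        """Read the words back off a tree, left to right."""
        if isinstance(node, str):
            return [node]
        left, right = node
        return leaves(left) + leaves(right)

    print(" ".join(leaves(root_structure)))     # will she eat a cake - our simple question
    print(" ".join(leaves(surface_structure)))  # she will eat a cake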

And there you have it. We have grown a syntax tree! Why not have a go and see if you can grow your own simple sentences? Here are some you might like to try:

  • He will dance the tango.
  • She would play a game.
  • We have read a book.
  • She shall have music.
  • You have met my mother.
  • Groucho had shot an elephant.

Does anyone speak the Queen’s English?

In 2000, Jonathan Harrington and his colleagues at Macquarie University in Sydney wrote a series of publications on the Queen’s English. Literally. They compared a number of vowel sounds produced by the Queen in her annual Christmas messages from the 1950s to the same vowel sounds produced in the 1980s, and used female BBC broadcasters speaking standard Southern British English (SSBE) as a control group. The idea was to observe whether the Queen’s speech had changed over those 30 years, and whether it had moved closer to the English used by the control group. Their results indicated that not only had the Queen’s English changed quite substantially, it had changed in the direction of – though not reaching – the standard English produced by news broadcasters in the 1980s. Conclusion: the Queen no longer speaks the Queen’s English of the 1950s.

The articles, of course, sparked a lot of media interest. But is it really so strange that the Queen’s speech has changed? Firstly, with age, physiological changes to the vocal cords and vocal tract inevitably lead to changes in the voice. So the Queen’s pitch was, physiologically speaking, bound to be lower in 1987 than when she was 30 years younger. Similar changes to the resonances of the vocal tract would have influenced the measures taken by Harrington and his colleagues. And secondly, language itself is not a static entity. The way English is spoken in the UK changes over time, as does the speech of smaller speech communities such as the royal family. Not even the Queen’s aristocratic English is immune to this tendency.

Does that mean the Queen will eventually end up sounding like the rest of us? The answer is, in all likelihood, no. While her speech in the 1980s does not sound quite as cut-glass as the broadcast from the 1950s, it still sounds unmistakably upper-class. Think of it this way: both her English and the SSBE of the middle-class public are changing, so although her vowels are likely to continue to move towards Harrington et al.’s 1980s SSBE targets, the rest of us have long stopped sounding like that. In other words, she will most likely continue to speak the Queen’s English; it’s just that the Queen’s English, like any other language variety, is not likely to stay the same over time.

So what exactly has changed from the 1950s to the 1980s? If you listen to the two YouTube clips below, you’ll notice a wealth of interesting phonetic phenomena. For instance, in the clip from 1957, notice how she says the word “often” (/ɔːfən/, or orfen, around 0:55 in the clip), whereas in the 1987 Christmas message she says something closer to /ɒfən/ (or ofen) (at 2:33). Similarly, in the early clip her /uː/ vowel in “you” and “too” is very back, whereas in the later clip it’s more fronted, that is, closer to the vowel the rest of us are likely to produce. Another interesting feature to look out for is the second vowel in the word “happY”, which is produced like the vowel in “kit” in the early clip (e.g. “historY” at 1:22), but closer to the /i:/ vowel in the word “fleece” in the later clip. This latter point is further described and discussed in a later paper by Harrington and his colleagues (Harrington et al. 2006).

If you’re interested in reading more on the Queen’s English, here’s the link to a brief and non-technical paper in Nature, and here’s the longer and more phoneticsy full paper from the Journal of the International Phonetic Association.

(Thanks to Adrian Leemann, who presented Harrington et al.’s work at our Phonetics and Phonology reading group, thus providing the material for this blog post).

The Language of Light — how we imitate light with sounds

The inspiration for this blog post comes from a presentation I gave at this year’s Cambridge Science Festival (you can find my presentation here). The topic of the festival was Light and I took this as an opportunity to talk about how words in a language come to have certain meanings — and why in English we seem to use certain sounds when we refer to phenomena that have to do with light.

What does this sunset sound like?

And this reflection?

Consider the pictures on the right. Both of these pictures show certain types of light — I hope that you would agree that a fitting word to describe the sunset in English would be glow. In the second picture, the kind of reflection on the water can be described by the word glisten. We can repeat this with other kinds of light, and chances are that you’d use words like gleam, glitter, etc. So could we go ahead and suggest that words that start with the sounds [gl] always refer to light in some sense? And why would this be the case?

To explore this question, we can go ahead and formulate a hypothesis about English and see whether it is true.

Some sounds, like [gl], refer to the concept of “light”.

Signs: arbitrary and conventionalised

Before testing whether this hypothesis is true, we need to get an idea about how sounds of words and the meanings of words are put together. This is where things get interesting. The Swiss linguist Ferdinand de Saussure famously argued that the connection between sound (i.e. the way a word sounds, a signifier) and meaning (i.e. the concept a word refers to, a signified) is arbitrary.

This means that there is no inherent connection between a word like tree and the concept it refers to. There is evidence for this: the concept “tree” is referred to differently across languages and people generally don’t have any trouble associating different sounds with the same concept. (A lot more can be said about this, obviously.)

This can be taken further: the linguist André Martinet coined the notion of “double articulation” or “duality of patterning”. This means that meaningful words are made up of smaller units that are meaningless. Consider again the word tree. It is made up of three phonemes, a /t/, a /r/, and an /i:/. It is difficult to argue that /t/ refers to the trunk, /r/ the leaves, and /i:/ to the branches of a tree — or any other combination. Yet when put together in a certain order, speakers of English understand this sequence of sounds to refer to a particular concept involving a trunk, leaves, and branches (a very prototypical kind of tree).

Now this leaves us in a pickle: if de Saussure and Martinet are right, there is no way that a combination of sounds like [gl] can refer to any semantic concept: [gl] is not a “word” on its own, it is merely a sequence of sounds that do not have their own meaning. So why do we find these sounds showing up again and again in words referring to light? Before tackling that question, let’s look at a corner of language in which de Saussure’s arbitrariness is less pronounced (pun intended).

De Saussure was aware that not all signs are completely arbitrary: certain sounds are iconic. This means that the way they sound reflects the concept that they refer to. A well-known phenomenon of this sort is onomatopoeia (for lovers of Wikipedia lists, I recommend this gem). Onomatopoeia is interesting because it seems to provide counterexamples to the claim that all signs are arbitrary: is it a coincidence that words referring to snoring across languages have [r], [k] and [s] sounds in them?

These sounds obviously try to imitate the extralinguistic phenomena that they refer to, so in this sense they are not arbitrary. But they are not fully iconic either: as de Saussure also suggested, arbitrariness is one side of the sign-meaning coin. The flip side is “conventionalisation”. This means that while the sign-meaning connection is often arbitrary, it is not fully arbitrary. Not everyone makes up their own arbitrary sounds when they refer to trees! A language community uses certain sounds that are generally agreed upon to refer to a certain concept by convention.

Interestingly, the same holds for onomatopoeia. While snore might sound more like snoring than tree sounds like, err, a tree, when using the concept of snoring as a verb, speakers of English cannot help but use the conventionalised verb snore rather than merely imitate a snoring sound. The same force of conventionalisation can also be seen when comparing how different languages express animal noises, a classic example of onomatopoeia. In Hungarian, a pig’s grunt is referred to by röf röf. In English, it is oink oink. Both of these sounds are said to be iconic, yet they don’t even share a single sound! Part of the reason is that languages have different inventories of sounds, so röf röf wouldn’t even be a possible English word.

In an English speaking community, I cannot go around and expect people to understand what I’m referring to if I say röf röf, even if it sounds more like a pig than oink. In Hungarian and English, respectively, these two different sounds are the conventionalised ways of referring to pigs’ sounds (listen to a pig here).

Is there a language of light, then?

Where does this leave us with respect to light? We have seen so far that arbitrariness and conventionalisation exert a certain force on words in a language. There is an elephant in the room here, however. If onomatopoeia refers to words that sound like what they describe, how can we talk about iconic words referring to light? Light does not “sound” at all, so why should something like [gl] refer to light? How can a sound be iconic and refer to something visual?

An answer here is that this is due to history: in Proto-Indo-European, a hypothetical language spoken some 5,500 years ago, a word like *ghel- (the “*” indicates that this is a reconstructed form) meant ‘shine’ (http://etymonline.com/index.php?term=glow). This word was very successful and has survived thousands of years in forms like glow and yellow, and others like glass, glitter, etc. So this is where the connection between [gl] and light in English comes from.

Speakers of English (or any other language), however, are not living etymological dictionaries, and this piece of information has to be looked up. Thanks to scores of Indo-Europeanists, we can actually do that (go and thank your favourite Indo-Europeanist now!). But what speakers of a community can do is associate a sound like [gl] with a concept like “light”, because there are many words that sound similar and have similar meanings.

Back to our hypothesis: do sounds like [gl] refer to light? In a way, they can, but at the same time, words with [gl] do not have to refer to anything that has to do with light. Think glove. So our hypothesis is wrong, although we have seen that it is not straightforwardly wrong, and that there is a lot to say about “the language of light”.
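For concreteness, here is a toy version of the hypothesis test in Python (the word list and its classification are just the handful of examples from this post, not a serious corpus study):

    # Toy check of the hypothesis that [gl] words refer to light,
    # using only the examples discussed in this post.
    gl_words = {
        "glow": True, "glisten": True, "gleam": True, "glitter": True,
        "glass": True,   # light-related via the *ghel- root, historically
        "glove": False,  # counterexample: nothing to do with light
    }

    light_count = sum(gl_words.values())
    print(f"{light_count} of {len(gl_words)} gl-words here relate to light")
    # -> 5 of 6: a strong tendency, but no exceptionless rule

A proper test would run this over a full dictionary, where plenty of further counterexamples (glue, globe, …) would turn up alongside the light words.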

One last thing — sign language!

Just as it might be strange to think about spoken language words imitating soundless phenomena like light, one can ask how onomatopoeia and iconicity work in sign languages. Isn’t sign language more likely to have an iconic expression for tree than spoken language is? A tree, after all, can be experienced visually much more readily than by listening to it. At the same time, what about sign language and animal noises? Look at the following videos and see whether you can work out which of these British Sign Language (BSL) signs refers to “light”.

Did you manage? In case you didn’t: don’t worry — sign languages use expressions that are arbitrary and conventionalised just like spoken languages. Like all other languages, they have some iconic expressions, but again, these have to follow certain conventions! (The first sign actually means tree in BSL, the second is sun or sunshine. The videos are all from the UCL British Sign Language SignBank.)

To sum up: all languages use signs that have a somewhat arbitrary connection between sound and meaning. But the degree to which this is the case varies, and certain parts of a language’s vocabulary are more iconic than others. Whatever the connection between sound and meaning, spoken as well as sign languages make up meaningful expressions from smaller, meaningless units, sounds and gestures, respectively.

So next time you hear a word that starts with [gl], you can think of Ferdinand de Saussure, André Martinet and this blog post and be fascinated by how we create meaning (and light) out of thin air!

:)

A few weeks ago I think – though I can remember almost none of the details – I came across an article complaining about how emoticons (or “smileys”) were a terrible thing, symptomatic of declining standards of literacy and probably going to single-handedly bring about the end of civilisation within the next decade, or something like that. Even accounting for the slight possibility that I imagined the whole thing (possibly in a dream resulting from excessive exposure to linguistics), you don’t have to look far on the Internet to find not entirely dissimilar sentiments.

By “emoticons” I refer to those typographic representations of emotions (and occasionally other things), usually in the form of a stylised facial expression – e.g.

[image: a selection of typographic emoticons]

– but often converted into (sometimes quite badly designed) “actual” images depending on what social medium, text messaging service etc. you are using at the time.

So, are emoticons a portent of the apocalypse? I am inclined to think they are rather a fairly natural development of the way written language works. Compare ordinary punctuation: full stops and commas and things like that. When our alphabet was first being developed, people didn’t use punctuation at all, and even once they began to, the details of usage were highly variable for a very long time. Something resembling our modern system of punctuation, with its complex (and disputed) rules about where or where not to put commas and its diverse range of standardised symbols including colons, semicolons, dashes, brackets, quotation marks and so on, only really arose around five hundred years ago. Many other writing systems continue to use little or no punctuation, and the symbols they do use are often only recent borrowings from the West.

So – in spite of contrived examples about the consumption of grandmothers or vegetation – people are demonstrably perfectly capable of getting on fine without punctuation. But nowadays nearly everybody uses punctuation unless they are simply ignorant of the rules or being deliberately avant garde. Why? Partially because punctuation, while not necessary for communicating effectively, is nevertheless useful.

What has this got to do with emoticons? Well, I think the same thing applies. We don’t absolutely need them, which is why some people say things like “I find it lazy. Are your words not enough?”. There are always going to be emoticon-free ways of communicating the same message, just as I could no doubt find a way to write this post without using any commas without hampering its intelligibility, if I so desired. But at the same time emoticons are helpful: it’s nice to be able to say “I am feeling happy/sad/confused about this” or “I am joking”, or whatever, without having to write out all those words, just as using commas gives me an advantage in communicating how a sentence should be parsed.

In our writing, the letters of the alphabet only communicate what linguists call the “segmental” aspects of our speech. Now, this is perfectly adequate to convey much meaning – indeed, many writing systems (e.g. Arabic) get away with even less, by omitting vowels and only writing consonants. But there are still other things which spoken language gets across, known as “suprasegmental” aspects. Some of these can be conveyed through punctuation: for instance full stops and commas tell us something about sentence structure, something which in speech is conveyed through pauses and intonation. Question and exclamation marks, likewise, capture something of intonation patterns which are otherwise lost in writing.

Punctuation, however, still doesn’t allow writing to easily communicate everything we regularly communicate in speech. For example, you can usually tell just from listening to someone (even on the telephone, in the absence of visual cues) how they feel about what they’re talking about, or whether they are joking or not. In writing, it can be a lot harder. Emoticons go some way towards mitigating this problem: a smiling face expresses much the same thing as a pleased-sounding tone of voice, just as a question mark corresponds to a questioning intonation. This is particularly important in the context of the new, rapid-exchange type of writing (in function more closely resembling speech) that has arisen with the advent of new technologies.

So – in spite of the fact that those who get annoyed by emoticons are probably also those most likely to get annoyed by absent punctuation – emoticons and punctuation are ultimately doing basically the same thing: capturing something of spoken language which is otherwise lost in writing. They enhance our communicative abilities, rather than impairing them.

So they probably aren’t something worth getting annoyed about. :)

The that-trace effect: What is it and why is it interesting?

What is the that-trace effect?

In English, the subordinating conjunction that is often optional.

(1)        You think that John kissed Mary.

(2)        You think John kissed Mary.

(1) and (2) are both acceptable sentences in English: that is present in (1) but absent in (2).

When we ask a question about an element inside the subordinate clause, that usually remains optional, as in (3) and (4). Note how who(m) appears in sentence-initial position. However, we still intuitively feel that, in this particular example, it is the direct object of kissed. Since direct objects in English follow the relevant verb (Mary follows kissed in (1) and (2)), we can capture this intuition by putting a trace of who(m), represented as twho(m), in the position just after kissed.

(3)        Who(m) do you think that John kissed twho(m)?

(4)        Who(m) do you think John kissed twho(m)?

However, there are instances when that is not optional. When we ask a question about the subject of the subordinate clause (corresponding to John in all the examples so far), that must be absent (* means that the sentence is unacceptable).

(5)        *Who do you think that twho kissed Mary?

(6)        Who do you think twho kissed Mary?

The unacceptable configuration involves that followed immediately by a trace, hence this effect is called the that-trace effect (Perlmutter, 1968).

Why is the that-trace effect interesting?

The that-trace effect is interesting in a number of respects, but I’ll just mention two of them. The first is the question of how we, as English speakers, come to ‘know’ that there is a contrast between (5) and (6) given that that is generally optional as we saw in (1) and (2), and (3) and (4). Unless you’ve studied syntax, you’ve probably never been explicitly taught that there exists a that-trace effect in English at all. So how do we learn such an effect? Phillips (2013) looks at how frequent examples like (3-6) are in a corpus of speech directed at children. This is what he found (Phillips, 2013: 144):

(7)

a. Who do you think that John met __?        2 / 11,308

b. Who do you think John met __?           159 / 11,308

c. *Who do you think that __ left?           0 / 11,308

d. Who do you think __ left?                13 / 11,308

The corpus contains 11,308 examples of wh-questions (i.e. questions involving the wh-phrases who, what, etc.). Out of the 11,308 examples, there were no examples of the form in (7c), i.e. cases where the subject of the subordinate clause is questioned. This is the configuration that English speakers judge unacceptable. What is particularly interesting is (7a). Out of the 11,308 examples, there were only two tokens where that is present and the direct object of the subordinate clause has been questioned. Yet speakers judge such sentences as acceptable. If examples like (7a) are so rare, why don’t speakers hypothesise that (7c) just happens to be very rare as well? Alternatively, given how rare it is to find that in wh-questions, why don’t speakers hypothesise that that is generally impossible in wh-questions? Either way, it is quite difficult to see how the contrast between (5) and (6) (or (7c) and (7d)) can be acquired purely from child-directed speech. We thus hypothesise that there is something about the way the syntax (of English) works that allows us to ‘know’ about the that-trace effect. This is a classic argument based on the poverty of the stimulus.

The second point of interest comes from the fact that English has a that-trace effect as well as an anti-that-trace effect. The anti-that-trace effect can be seen in relative clauses. In English, we can form relative clauses using that. In general, that is optional in relative clauses just as it is in (1-4) above (we use traces again and the relative clause is in boldface).

(8)        The woman that John kissed twoman is called Mary.

(9)        The woman John kissed twoman is called Mary.

In (8) and (9) we have relativised a direct object; woman is interpreted as the direct object of kissed inside the relative clause.

Now, if we relativise a subject, that is no longer optional. In such cases, that is obligatory.

(10)      The man that tman kissed Mary is called John.

(11)      *The man tman kissed Mary is called John.

Once again there is something special about the relationship between that and the subject of the subordinate clause. However, the effect in (10) and (11) is the exact opposite of the that-trace effect seen in (5) and (6)! As seen in (5), that immediately followed by a trace is unacceptable; that must be absent, as in (6). In (10) and (11), the situation is reversed. As seen in (10), that immediately followed by a trace is acceptable; the absence of that results in unacceptability, as in (11). We thus call the effect in (10) and (11) the anti-that-trace effect.
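Descriptively, the whole pattern can be summed up in a few lines of toy Python – my own restatement of the judgements in (3)-(6) and (8)-(11), not an analysis of why they hold:

    # Schematic summary of the acceptability judgements above.
    # clause: "question" or "relative"; extraction: "subject" or "object".
    def acceptable(clause, extraction, has_that):
        if extraction == "object":
            return True            # 'that' is freely optional: (3)/(4), (8)/(9)
        if clause == "question":
            return not has_that    # that-trace effect: (5) vs (6)
        return has_that            # anti-that-trace effect in relatives: (10) vs (11)

    print(acceptable("question", "subject", has_that=True))   # False, cf. (5)
    print(acceptable("relative", "subject", has_that=False))  # False, cf. (11)

The puzzle is precisely that the grammar delivers the two opposite clauses of this function, while the child hears almost no direct evidence for either.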

The problem for us, then, is that there is something about the syntax of English that allows us to ‘know’ that the that-trace effect exists, but which also allows the existence of its opposite, the anti-that-trace effect. The challenge, which I am working on at the moment, is to find out what that something is!

References

Perlmutter, D. M. (1968). Deep and surface structure constraints in syntax. Doctoral dissertation, MIT.

Phillips, C. (2013). On the nature of island constraints II: Language learning and innateness. In J. Sprouse & N. Hornstein (Eds.), Experimental Syntax and Island Effects (pp. 132–157). Cambridge: Cambridge University Press.

An ambiguous question

A couple of weeks ago my college held a graduate symposium, and it was a broad-ranging and very interesting day, with presentations from computer science, psychology, physics, classics, and even a ‘three-minute thesis’ on lucid dreaming. A colleague of mine gave a talk entitled ‘Telling time: time in the world’s languages’ (the theme was time – a linguists’ bounty!), in which he gave us a whistle-stop tour of languages with grammatically encoded aspect1, languages with three tenses (like English), with just two tenses (non-past and past, or non-future and future), and with no tenses at all. Languages like the Chinese varieties don’t have any grammatical inflection to indicate tense (like we have in English, by adding -ed to regular verbs in the past tense, for example).


Sir John Tenniel’s illustration of the Caterpillar for Lewis Carroll’s Alice’s Adventures in Wonderland

In the question time, various members of the audience voiced surprise at how this might be. How can speakers of languages without tenses talk about time? How on earth do they get along? The intuition here, nurtured by our own linguistic experience, is that the ambiguity of tenseless sentences would be insurmountable. If someone hears a sentence like ‘John eat cake’, which could mean John eats cake, John will eat cake, John ate cake, John had eaten cake, and so on, how are they to know which meaning is intended?

This raises the question, just how rife is ambiguity in language? And is it all that terrible anyway? Of course, in fine rhetoric, technical writing, and especially legal language, there is good reason to avoid ambiguity. While in other corners of language, such as poetry, there is reason to embrace it:

Never seek to tell thy love
Love that never told can be
For the gentle wind does move
Silently invisibly
(William Blake, referenced by Grice, 1975)

In lexical semantics, we try to carefully distinguish ambiguity from polysemy and vagueness (although you can probably think of examples where they mesh).

  • Ambiguity occurs when a form has two (or more) meanings, e.g., He went to the bank to relax with some fishing / to sort out a mortgage. (So, strictly speaking, our Chinese example above is not a case of ambiguity in a technical sense, more underspecification.)
  • Polysemy refers to a single word with more than one sense, a cluster of related meanings. e.g., I buy the newspaper that your dad works at. (clue: product and institution)
  • Vagueness concerns borderline cases. How few hairs does a man have to have to count as bald? How tall does a ‘tall man’ have to be?

And when you start looking, ambiguity is all around you. Besides lexical ambiguity (some more examples include ‘run’, ‘port’, ‘rose’, ‘like’, ‘cleave’, ‘book’, and ‘light’), there’s syntactic ambiguity, where the structure of a sentence allows for more than one interpretation, of which some infamous examples are:

Visiting relatives can be boring.
The chicken is ready to eat.

And semantic scope ambiguity, like this:

Every Cambridge student has borrowed some books.
> every Cambridge student has borrowed some (or other) books
> there are some (particular) books that every Cambridge student has borrowed.
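In first-order logic notation (simplifying ‘some books’ to a single existential), the two readings differ only in which quantifier takes wider scope:

    ∀x (student(x) → ∃y (book(y) ∧ borrowed(x, y)))   – for every student, some book or other
    ∃y (book(y) ∧ ∀x (student(x) → borrowed(x, y)))   – one particular book, borrowed by every student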


CC Martin (Flickr)

So it is beginning to look like (accidental) ambiguity may not be such a rare thing. But isn’t this a deficiency in our linguistic system? If hearers are constantly having to work out what speakers mean, isn’t that a lot of effort on their part? Well, I think that kind of worried response assumes that speakers and hearers are just like transmitters and decoders. Or, to put it another way, it follows the ‘mind as computer’ metaphor – programming languages do not contain ambiguity, after all. But that’s precisely because machines that parse code are rather different from sophisticated human minds that are able to make subtle and fast inferences about speakers’ intentions. (And, besides, this ambiguity doesn’t seem to have stopped us communicating pretty well so far.)

Today I read an intriguing article by Piantadosi, Tily & Gibson which sets out two reasons why ambiguity is actually expected in a communicative system like language. Firstly, given that words and phrases occur in a context which is itself informative (the preceding discourse, the social and world context, the speaker and hearer’s background knowledge), disambiguating information encoded lexically could actually be redundant, and an efficient language will not convey redundant information. Secondly, they follow Zipfian principles that suggest that ambiguity may arise from a trade-off between ease of production (lazy speakers who want to say the minimum) and ease of comprehension (lazy hearers who want maximum clarity and minimum work of interpretation). Importantly, production – the articulation of utterances – seems to be ‘costly’ (whatever that means in terms of physical / psychological processes), while inference – interpreting potential ambiguity – seems to be relatively cheap. This means that where an ‘easy’ word in terms of production has two distinct meanings that usually turn up in different contexts, this is an overall win for the linguistic system (compared to one ‘easy’ and one ‘hard’ word form for the two different meanings). Crucially, though, this relies on communicators who are adept at pragmatic inference, as Grice and other pragmaticians have long proposed.

So coming back to our example of Chinese and other ‘poor’ languages without tense: besides the other strategies they have for expressing temporality, like adverbs (today, yesterday, now, in the past, etc.), their speakers can safely assume that their hearers are able to make the necessary pragmatic inferences, given the context, to work out what the speakers intend to communicate – thereby avoiding ambiguity and, perish the thought, miscommunication.

1 Aspect, roughly speaking, is how an event is viewed in relation to time. One common distinction is between perfective and imperfective, which is a bit like viewing an event from the outside as a complete whole (perfective) or zooming in on a part of it on the inside (imperfective). In English, this would be the difference between ‘John ate the cake’ and ‘John was eating the cake’.
References
Grice, H. P. (1975). Logic and conversation. In R. Stainton (Ed.), Perspectives in the Philosophy of Language (pp. 41–58). Broadview Press.
Piantadosi, S. T., Tily, H., & Gibson, E. (2012). The communicative function of ambiguity in language. Cognition, 122(3), 280–291.

What’s in a word (‘genocide’)?

Debate around the centenary of the Armenian ‘genocide’ on April 24th, 2015, has centred on the use of one specific word: ‘genocide’.
Several angles deserve consideration: history, politics / diplomacy, sociology, legal issues and indeed linguistics. Naturally, linguistic considerations are connected with the others and in turn take on several facets: etymology (the origin of the word), semantics (meaning), sociolinguistics (here, the identificational / social effects of language) and especially pragmatics (underlying implications and the (intended) effects of a word on the recipient). The latter – inferred / added meaning due to the context – is most relevant with the term ‘genocide’, as it extends into the political / legal sphere (with its precise terminology).
From the origins of various “totemic” words in the semantic field (‘genocide’, ‘holocaust’ and ‘shoah’), one can identify (intended/implied) meanings and finally consider further implications.
‘Genocide’ is a hybrid formation from Greek γένος (genos, ‘race, people’) and Latin caedere (‘to kill’). It signifies the (intended) extermination of a whole people in a particular area and was coined around 1943-44 by the Polish Jew Raphael Lemkin. (1) Notably, Lemkin created the term specifically with the Armenian (plus the Jewish) case in mind. Simplistically, the term ‘genocide’ thus automatically applies to the case. (2)


The Armenian Genocide memorial in Bikfaya, Lebanon. CC Serouj

Before ‘genocide’, the term ‘holocaust’ was already in use. It comes from Greek ὁλόκαυστον (holokauston, “something wholly burnt”), originally used in the Greek version of the Bible to signify “burnt offerings”. It underwent semantic change to mean “massacre / total destruction”. It was possibly first associated with burning people, as its use by the journalist Leitch Ritchie in 1833 suggests, referring to the 1,300 people burnt in a church in Vitry-le-François in 1142. The word then underwent extension of meaning to other cases and methods of killing. Contemporaries of the Ottoman atrocities, including Churchill, used it to refer to the Armenian case. In the aftermath of World War II (the killing of over 6 million Jews and others by the Nazis), the term was first applied specifically to these events, mainly from the 1950s onwards, to translate Hebrew ‘shoah’. Today, ‘the Holocaust’ is generally directly associated with that particular Jewish holocaust. The term is in the process of (semantic) restriction / specification (i.e. it is not as yet exclusive to that context).
In Israel, the word used is השואה ‘ha shoah’ (originally meaning ‘destruction’ and ‘calamity’ in Hebrew; it reflects the experience of the Jewish people, arguably the most traumatising and horrific experience imaginable). It is also preferred by many scholars, since its usage (in Hebrew) precedes the use of ‘holocaust’, and it can be seen as a sign of respect to use the term used by the victims of the crime against humanity. Additionally, however, some perceive a certain inappropriateness in the term ‘holocaust’ on a historical / theological, and hence pragmatic, level, given its original meaning of a “burnt offering” to God. The Jewish-American historian Laqueur reckons “it was not the intention of the Nazis to make a sacrifice of this kind and the position of the Jews was not that of a ritual victim”. Interestingly, the Armenians prefer a similar term (Մեծ Եղեռն – Medz Yeghern: “Great Evil-Crime”, often rendered as “Great Catastrophe”) over the loan translation of ‘genocide’ (Հայոց ցեղասպանություն – hayots tseghaspanutyun). (3) The Armenian term accuses the perpetrators even more clearly than shoah does. (The danger of either relativising the experiences by comparing them, or isolating the events from a general context of humanity by seeing them as unique, is another issue.)
On the pragmatic level, US President Obama used the Armenian word in his commemoration speech. Reactions were mixed, some suggesting that he was diplomatically trying to avoid the term ‘genocide’ by using the native yet internationally unintelligible term. Either way, avoidance of the term ‘genocide’ was clearly an elegant way to get out of a diplomatic dilemma.
Some say denying a crime like genocide further victimises / dehumanises the victims and their descendants by not paying them the respect of acknowledging their suffering. This – the perception and effect of words – is where the use of the ‘right’ language is crucial. Legal implications are another issue: were Turkey to use the word ‘genocide’ in an official capacity, it would expose itself to renewed reparation and land-return claims – similar to Greek Prime Minister Tsipras’s recent demands for World War II reparations from German Chancellor Merkel. Freedom of speech is another sensitive topic on both sides: Orhan Pamuk (the Turkish Nobel laureate) was charged with insulting the state for publicly acknowledging the ‘genocide’, while conversely there is an ongoing international case between Doğu Perinçek and Switzerland over his conviction for maintaining that the events did not constitute ‘genocide’ (in its common definition) – emphasising the importance of precise language use, especially in a legal context.
Like Obama, German President Gauck picked his words carefully: in a remembrance speech he all but labelled the events ‘genocide’, while shrewdly abstracting away from the individual case so as not to equate it explicitly with the term. (4) Pragmatically (and diplomatically), these solutions leave enough room for interpretation through the use of a specific word, either in a particular language or in a proposition (statement) and its implicational context.
A historically noteworthy fact, underlining the relevance of language as an identificational tool, is that in the same context in which the 1915 massacres took place – the Turkification of the ‘nation’ and the new Turkish state – Atatürk, the ‘Father of the Turks’, soon afterwards also Turkified the language, which in Ottoman times had used the Arabic script as well as many Persian and Arabic words (today either obsolete or optional in ‘Turkified’ Modern Turkish).
That this was a conscious process (at least linguistically) of creating a Turkish identity suggests, first, that the use of the term ‘genocide’ may ultimately be appropriate, and secondly that this ideological fact (the intrinsic link between the events of 1915 and the establishment of the new, consciously Turkish state – compare the ongoing conflict with the Kurds) is part of the reason (besides legal / psychological ones) why Turkey remains reluctant to use the term ‘genocide’.

 
Some relevant quotations:
(1) “By ‘genocide’ we mean the destruction of a nation or of an ethnic group. This new word, coined by the author to denote an old practice in its modern development, (…) does not necessarily mean the immediate destruction of a nation, except when accomplished by mass killings of all members of a nation. It is intended rather to signify a coordinated plan of different actions aiming at the destruction of essential foundations of the life of national groups, with the aim of annihilating the groups themselves. Genocide is directed against the national group as an entity, and the actions involved are directed against individuals, not in their individual capacity, but as members of the national group”.

(2) “I became interested in genocide because it happened so many times. It happened to the Armenians, then after the Armenians, Hitler took action.”

(3) Turkish has Ermeni Soykırımı, literally ‘Armenian race-massacre’.

(4) “The fate of the Armenians is exemplary for the history of mass destruction, ethnic cleansing, expulsions and indeed genocides, which marks the 20th century in such a horrific way.”