Decoding the ‘bird language’
Whistled speech has arisen in at least 80 languages around the world, especially in rugged, mountainous terrain or dense forests, where ordinary speech doesn’t carry far enough. Linguists believe such adaptations are more than just a curiosity.
On La Gomera and El Hierro in the Canary Islands, tourists can often hear locals communicating over long distances by whistling — not a tune, but the Spanish language. These whistles, known as Silbo, are one of the last vestiges of a much more widespread use of whistled languages.
In at least 80 cultures worldwide, people have developed whistled versions of the local language when the circumstances call for it.
Whistled languages have had their share of surprising episodes as well. They have often flourished when there has been a need for secrecy — in Papua New Guinea during the Second World War, for example, when whistlers of the Wam language were recruited to transmit military messages over the radio to evade Japanese surveillance — or when they have proved useful in countering some new threat.
In the foothills of the Himalayas, you may hear a remarkable duet ringing through the forest. To the untrained ear, it might sound like musicians warming up a strange instrument. In reality, the enchanting melody is the sound of two lovers talking in a secret, whistled language. The Hmong people are one of just a handful of communities in the world who can speak in whistles. The sounds normally allow farmers to chat across their fields and hunters to call to each other in the forest.
By studying whistled languages, researchers hope to learn more about how our brains extract meaning from the complex sound patterns of speech. Whistling may even provide a glimpse of one of the most dramatic leaps forward in human evolution: the origin of language itself.
Whistled languages are almost always developed by traditional cultures that live in rugged, mountainous terrain or in dense forests. That’s because whistled speech carries much farther than ordinary speech or shouting, says Julien Meyer, a linguist and bioacoustician at CNRS, the French National Research Centre, who explores the topic of whistled languages in the 2021 Annual Review of Linguistics.
Skilled whistlers can reach 120 decibels — louder than a car horn — and their whistles pack most of this power into a frequency range of 1 to 4 kHz, which is above the pitch of most ambient noise.
As a result, whistled speech can be understood up to 10 times as far away as ordinary shouting can. That lets people communicate even when they cannot easily approach close enough to shout. On La Gomera, for example, a few traditional shepherds still whistle to one another across mountain valleys that could take hours to cross.
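To see how those two figures relate, here is a minimal back-of-the-envelope sketch in Python. It assumes ideal free-field spreading (the level falls by 20 log10 of the distance ratio) and uses an illustrative 100 dB shout and a 60 dB audibility threshold, neither of which comes from the article; real propagation across valleys, with wind, terrain and air absorption, is far messier.

```python
def audible_range_m(source_db_at_1m: float, threshold_db: float) -> float:
    """Distance (in metres) at which a source drops to the threshold level,
    assuming ideal free-field spreading: L(d) = L(1 m) - 20*log10(d)."""
    return 10 ** ((source_db_at_1m - threshold_db) / 20)

WHISTLE_DB = 120   # skilled whistler, figure quoted in the article
SHOUT_DB = 100     # illustrative value for a very loud shout (assumption)
THRESHOLD_DB = 60  # illustrative audibility threshold in quiet terrain (assumption)

whistle = audible_range_m(WHISTLE_DB, THRESHOLD_DB)
shout = audible_range_m(SHOUT_DB, THRESHOLD_DB)
print(f"whistle: ~{whistle:.0f} m, shout: ~{shout:.0f} m, ratio: {whistle / shout:.0f}x")
# -> whistle: ~1000 m, shout: ~100 m, ratio: 10x
```

Under these assumptions, every 20 dB advantage buys roughly ten times the range at the same received level, which is consistent with the factor quoted above.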
Whistled languages work because many of the key elements of speech can be mimicked in a whistle. Meyer says: “We distinguish one speech sound, or phoneme, from another by subtle differences in their sound frequency patterns. A vowel such as a long e, for example, is formed higher in the mouth than a long o, giving it a higher sound. It’s not pitch, exactly; instead, it’s a more complex change in sound quality, or timbre, which is easily conveyed in a whistle.”
Consonants, too, can be whistled. A t, for example, is richer in higher frequencies than k, which gives the two sounds a different timbre, and there are also subtle differences that arise from movements of the tongue. Whistlers can capture all of these distinctions by varying the pitch and articulation of their whistle. And the skill can be adapted to any language, even those that have no tradition of whistling.
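As a toy illustration of that idea, the short Python sketch below synthesises two steady whistle-like tones inside the 1–4 kHz band mentioned earlier: a higher one standing in for a vowel like long e and a lower one for a vowel like long o. The specific pitch targets (3,000 Hz and 1,500 Hz) are illustrative assumptions, not measurements, and real whistlers also shape loudness, articulation and the transitions between sounds, which this sketch ignores.

```python
import wave
import numpy as np

SR = 16_000  # sample rate in Hz

def whistle_tone(freq_hz: float, dur_s: float = 0.4) -> np.ndarray:
    """A pure sine tone as a stand-in for a steady whistle at one pitch."""
    t = np.arange(int(SR * dur_s)) / SR
    tone = 0.5 * np.sin(2 * np.pi * freq_hz * t)
    fade = np.minimum(1.0, np.minimum(t, dur_s - t) / 0.02)  # 20 ms fade in/out
    return tone * fade

# Illustrative pitch targets inside the 1-4 kHz whistle band (assumed values):
# a "high" vowel such as long e sits near the top, a "low" vowel such as long o lower down.
vowel_pitches = {"e": 3000.0, "o": 1500.0}

signal = np.concatenate([whistle_tone(f) for f in vowel_pitches.values()])
pcm = (signal * 32767).astype(np.int16)

with wave.open("whistled_vowels.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)   # 16-bit samples
    wav.setframerate(SR)
    wav.writeframes(pcm.tobytes())
```

Playing the resulting file makes the point audible: the two tones differ only in frequency, yet they are as easy to tell apart as the vowels they stand in for.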
For centuries, shepherds from the small village of Aas in the French Pyrenees led their sheep and cattle up to mountain pastures for the summer months. To ease the solitude, they would communicate with each other or with the village below in a whistled form of the local Gascon dialect, transmitting and receiving information accurately over distances of up to 10 kilometres.
They “spoke” in simple phrases – “What’s the time?”, “Come and eat”, “Bring the sheep home” – but each word and syllable was articulated as in speech. Outsiders often mistook the whistling for simple signalling (“I’m over here!”), and the irony is that the world of academia only realised its oversight around the middle of the 20th century, just as the whistled language of Aas was dying on the lips of its last speakers.
Around 80 whistled languages have been reported around the world to date, of which roughly half have been recorded or studied, and Meyer says there are likely to be others that are either extant but unrecorded or that went extinct before any outsider logged them. As he explained in a review, they exist on every inhabited continent, usually where traditional rural lifestyles persist, and in places where the terrain makes long-distance communication both difficult and necessary – high mountains, for example, or dense forest.
Some researchers argue that the study of language evolution should pay more attention to whistled languages, since they might provide a glimpse of how our ancestors communicated before they had fully evolved into humans.
The origins of human language have long been debated. One prominent theory, first proposed by Charles Darwin, holds that speech evolved from a musical protolanguage, but there are others – for example, that communication was by gesture before it was vocalised. According to a third, “multimodal” approach, gestural and vocal forms of communication evolved in tandem, having different but complementary functions. Vocalisations might have had a coordinating role in social interactions, for example, whereas gestures might have been more referential – for pointing out features of the environment.
Those who support the theory of a musical protolanguage tend to argue that, as hominin brains expanded and they gained control of their vocal cords, calls that were once involuntary expressions of emotion were harnessed into a song that conveyed meaning and gradually acquired combinatorial power, or syntax (though some argue that syntax preceded meaning). Meyer suggests that there may have been a stage of intentional sound production even before such song: “It’s possible that whistling control developed before control of the vocal cords,” he says.
As language evolution expert Przemysław Żywiczyński of Nicolaus Copernicus University in Toruń, Poland points out, sign languages have emerged spontaneously in communities without speech – as in the case of Nicaraguan Sign Language, which developed in schools for deaf children in the 1980s – but there are no reports of the spontaneous emergence of a whistled language in such a community.
However, Meyer argues that whistled languages illustrate a more fundamental principle: “Whistles are complex enough to transmit the essential aspects of languages, confirming that vocal [cords] are not compulsory for an acoustic use of the language faculty.” So in his view, it’s possible that some form of whistled language – even if different from those in use today – might have preceded spoken language. And, he says, several strands of evidence now support that claim.
A 2018 study by Michel Belyk of the Bloorview Research Institute in Toronto, Canada and colleagues showed, for example, that people are better at imitating simple melodies when they whistle them as opposed to when they sing them. And among nonhuman primates, monkeys and apes have volitional control of their lips and tongues, but monkeys don’t have volitional control of their vocal cords, while some apes have some vocal control.
If Meyer is right, the current diversity of whistled languages – although we may only be seeing a snapshot of it – could reflect how language evolved since that protolanguage.
Human languages are either tonal or non-tonal. In tonal languages, such as Mandarin, a word’s meaning depends on its pitch with respect to the rest of the sentence. The vocal cords generate the pitch, or melody, which the oral articulators – including the lips and tongue – then mould into vowels and consonants, or phonemes. Since whistling doesn’t involve the vocal cords, only the oral articulators, whistlers of a tonal language must therefore choose which to transmit – the melody or the phonemes – and it turns out that they always choose the melody. In non-tonal languages – which include English and most other European languages – they don’t have to make that choice because pitch doesn’t affect meaning, so they only whistle the phonemes. There is also a musical form of whistling in both language types where the whistle follows a song’s lyrics by transposing their pitch. Thus whistled languages today comprise three types: non-tonal, tonal-melodic and musical.
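That three-way typology can be paraphrased as a simple decision rule. The sketch below merely restates the paragraph above in Python, with hypothetical names of my own; it is not an analysis tool and does not come from Meyer’s work.

```python
from enum import Enum, auto

class WhistleType(Enum):
    NON_TONAL = auto()      # whistle encodes phoneme timbre (e.g. Silbo for Spanish)
    TONAL_MELODIC = auto()  # whistle encodes the tone melody of a tonal language
    MUSICAL = auto()        # whistle transposes a song's lyrics into pitch

def classify(language_is_tonal: bool, follows_song_lyrics: bool = False) -> WhistleType:
    """Toy decision rule paraphrasing the article's three types of whistled language."""
    if follows_song_lyrics:
        return WhistleType.MUSICAL
    return WhistleType.TONAL_MELODIC if language_is_tonal else WhistleType.NON_TONAL

print(classify(language_is_tonal=False))  # NON_TONAL, as for whistled Spanish
print(classify(language_is_tonal=True))   # TONAL_MELODIC, as for a Mandarin-like language
```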
All human whistled languages are endangered, Meyer states, and most are likely to disappear within two generations. There are attempts afoot to revive some of them, for example in the Ossau Valley where Aas is located, but these may not succeed in bucking the broader trend. The languages’ vitality depends on that of the traditional rural practices with which they are associated, and those practices are themselves disappearing as roads, mobile phone masts and noise pollution penetrate once-secluded valleys, and young people move out to the cities.
Views expressed are personal