Category Archives: Language

Like Humans, Apes Communicate To Start And End Social Interactions (Language)

When we’re talking to another person, we probably wouldn’t leave without saying goodbye; that would just be impolite. Apes seem to do something similar, researchers report in a study published August 11 in the journal iScience, in which they documented apes purposefully using signals to start and then end interactions—a behavior not seen outside of the human species until now. They also found that the social and power dynamics between the interacting apes affected the communication efforts used, which the researchers say mirrors patterns of human politeness.

“We were able to launch rockets and land on the moon because we have the ability to share our intentions, which allows us to achieve things so much bigger than a single individual can achieve alone. This ability has been suggested to be at the heart of human nature,” says Raphaela Heesen, a postdoctoral researcher at Durham University in the United Kingdom. Sharing intentions and working together on a common goal leads to a mutual sense of obligation otherwise known as joint commitment—and now, she and her team are seeing evidence in great apes that might challenge the long-held claim that joint commitment is unique to humans.

In previous experiments on joint commitment, human children protested when an experimenter abruptly stopped playing with them. Offering toys or vocalizing, the children tried to re-engage the experimenter in their previously agreed-upon play. After witnessing a similar situation between two bonobos—who were interrupted while grooming but then used gestures to resume the interaction with each other—Heesen and colleagues became curious to learn more about how and when joint commitment first emerged in the human lineage.

This video shows two chimps exiting a play activity. Kume (male) is engaging in chase play with Colebe (male) in the main phase. Colebe, the phase initiator, then stops running and playing chase and turns around, upon which the exit phase starts. The two mutually gaze at each other; Colebe attempts to reinstate the play interaction, but Kume pushes (gesture) his head, upon which Colebe grabs (gesture) the hand with which Kume is holding onto Colebe’s head. Kume then drops his hand, the two hold hands for a moment while gazing at each other again, and they then perform a head-on-head (gesture) while facing each other. Colebe then sits down, Kume walks away, Colebe gazes one more time at Kume walking away, and the exit phase stops. No further interaction follows. Credit: Raphaela Heesen and Emilie Genty

But unlike previous scientists, Heesen and her team proposed that joint commitment isn’t solely based on the feeling of obligation between two participants to fulfill a shared promise. Instead, it also involves the process of setting up the agreement and mutually deciding afterward that the agreement has been fulfilled.

That means something as simple as entering a conversational commitment with eye contact and a “hello” and then signaling that a conversation is wrapping up by repeating “okay, sounds good” or saying “goodbye” could be an example of this process. So Heesen and her colleagues set out to see if great apes had a similar interaction entry and exit process, which she and her team argued would demonstrate the process of joint commitment.

After analyzing 1,242 interactions within groups of bonobos and chimpanzees in zoos, they found that the apes did in fact frequently gaze at and communicate with each other to start and end interactions. Bonobos exchanged entry signals and mutual gaze prior to playing 90% of the time and chimps 69% of the time. Exit phases were even more common, with 92% of bonobo and 86% of chimpanzee interactions involving exits. The signals included gestures like touching each other, holding hands or butting heads, or gazing at each other, before and after encounters like grooming or play.

The researchers also considered factors like how close the apes were to each other socially or who had more power over the other. Interestingly, the closer bonobos were to each other, the shorter the duration of their entry and exit phases, if these phases occurred at all. The authors say this pattern is similar to how we, as humans, communicate with others, too. “When you’re interacting with a good friend, you’re less likely to put in a lot of effort in communicating politely,” Heesen says.

This video shows two chimpanzees entering a social grooming activity. Madingo (male) approaches Macourie (female) and both mutually gaze at each other (start of the entry phase). Macourie then uses a series of gestures, first attempting to grab Madingo, then touching his shoulder and back (gestures), and finally, grab-pulling him at his hips (gesture). Macourie then starts grooming him on his shoulder once he is sitting in close proximity. The entry stops with the first grooming movements, upon which the main body starts. Credit: Raphaela Heesen and Emilie Genty

However, the level of friendship and strength of social bonds didn’t seem to affect the chimpanzees’ entries and exits at all. This could be because, in comparison to chimps’ despotic power hierarchies, bonobo societies are generally documented to be more egalitarian, with an emphasis on friendships and alliances between females and on close mother-son relationships.

As for understanding the origin and evolution of joint commitment, this study is another step forward—but Heesen says there’s still a lot to do. “Behavior doesn’t fossilize. You can’t dig up bones to look at how behavior has evolved. But you can study our closest living relatives: great apes like chimpanzees and bonobos,” says Heesen. “Whether this type of communication is present in other species will also be interesting to study in the future.”

This work was funded by the Swiss National Science Foundation.

iScience, Heesen et al.: “Assessing joint commitment as a process in great apes.” https://www.cell.com/iscience/fulltext/S2589-0042(21)00840-3


Provided by Cell Press

Researchers Explore How Children Learn Language (Language)

Small children learn language at a pace far faster than teenagers or adults. One explanation for this learning advantage comes not from differences between children and adults, but from the differences in the way that people talk to children and adults.

For the first time, a team of researchers developed a method to experimentally evaluate how parents use what they know about their children’s language when they talk to them. They found that parents have extremely precise models of their children’s language knowledge, and use these models to tune the language they use when speaking to them. The results are available in an advance online publication of the journal Psychological Science.

“We have known for years that parents talk to children differently than to other adults in a lot of ways, for example simplifying their speech, reduplicating words and stretching out vowel sounds,” said Daniel Yurovsky, assistant professor in psychology at Carnegie Mellon University. “This stuff helps young kids get a toehold into language, but we didn’t know whether parents change the way they talk as children are acquiring language, giving children language input that is ‘just right’ for learning the next thing.”

Adults tend to speak to children more slowly and at a higher pitch. They also use more exaggerated enunciation, repetition and simplified language structure. Adults also pepper their communication with questions to gauge the child’s comprehension. As the child’s language fluency increases, the sentence structure and complexity used by adults increase as well.

Yurovsky likens this to the progression a student follows when learning math in school.

“When you go to school, you start with algebra and then take plane geometry before moving on to calculus,” said Yurovsky. “People talk to kids using the same kind of structure without thinking about it. They are tracking how much their child knows about language and modifying how they speak so that children understand them.”

Yurovsky and his team sought to understand exactly how caregivers tune their interactions to match their child’s speech development. The team developed a game where parents helped their children to pick a specific animal from a set of three, a game that toddlers (aged 15 to 23 months) and their parents play routinely in their daily lives. Half of the animals in the matching game were animals that children typically learn before age 2 (e.g. cat, cow), and the other half were animals that are typically learned later (e.g. peacock, leopard).

The researchers asked 41 child-adult pairs to play the game in a naturalistic setting in the laboratory. They measured the differences in how parents talked about animals they thought their children knew as compared to those they thought their children did not know.

“Parents have an incredibly precise knowledge of their child’s language because they have witnessed them grow and learn,” said Yurovsky. “These results show that parents leverage their knowledge of their children’s language development to fine-tune the linguistic information they provide.”

The researchers found that caregivers used a variety of techniques to convey the ‘unknown’ animal to the child. The most common approach was to use additional descriptors familiar to the child.

“This [research] approach lets us confirm experimentally ideas that we have developed based on observations of how children and parents engage in the home,” said Yurovsky. “We found that parents not only used what they already knew about their children’s language knowledge before the study, but also that if they found out they were wrong — their child didn’t actually know ‘leopard’ for example — they changed the way they talked about that animal the next time around.”

The study consisted of 36 experimental trials in which each animal appeared as a target at least twice. The participants represented a racial composition similar to that of the United States (56% white, 27% Black, and 8% Hispanic).

The results reflect a western parenting perspective, as well as caregivers with a higher educational background than is representative of the country as a whole. The researchers did not independently measure the children’s knowledge of each animal, so the results of this study cannot determine whether the children learned any new animals while playing the game.

Yurovsky believes the results may have some relevance for researchers working in the field of machine learning.

“These results could help us understand how to think about machine learning language systems,” he said. “Right now we train language models by giving them all of the language data we can get our hands on all at once. But we might do better if we could give them the right data at the right time, keeping it at just the right level of complexity that they are ready for.”
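As a loose illustration of the “right data at the right time” idea in this quote, the sketch below orders a handful of training sentences by a crude complexity measure and splits them into stages. Both the complexity measure and the staging are hypothetical simplifications for illustration only; they are not drawn from the study or from any particular machine learning system.

```python
# Illustrative sketch of curriculum-style ordering of language training data:
# sort sentences by a crude complexity proxy and feed them to a learner in
# stages of increasing difficulty. Entirely hypothetical, for illustration.

def complexity(sentence):
    """Crude proxy: longer sentences with longer words count as harder."""
    words = sentence.split()
    return len(words) + sum(len(w) for w in words) / len(words)

def curriculum(sentences, n_stages=3):
    """Split sentences into stages of increasing complexity."""
    ordered = sorted(sentences, key=complexity)
    stage_size = -(-len(ordered) // n_stages)  # ceiling division
    return [ordered[i:i + stage_size] for i in range(0, len(ordered), stage_size)]

corpus = ["the cat sat", "a big dog ran away", "the spotted leopard hid behind the tall tree"]
for i, stage in enumerate(curriculum(corpus), start=1):
    print(f"stage {i}: {stage}")
```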

Yurovsky was joined on this project by Ashley Leung at the University of Chicago and Alex Tunkel at The George Washington University School of Medicine and Health Sciences. This project received funds from the James S. McDonnell Foundation.

Featured image credit: gettyimages


Provided by Carnegie Mellon University

Bilingualism As A Natural Therapy For Autistic Children (Language)

An international team led by UNIGE demonstrates that the characteristics of bilingualism allow autistic children to compensate for certain fundamental deficits.

Affecting more than one in a hundred children, autism spectrum disorder is one of the most common neurodevelopmental disorders. It has a particular impact on social interaction, including difficulties in understanding other people’s perspectives, beliefs, desires and emotions, known as ‘theory of mind’. Bilingual families with an autistic child often tend – and are sometimes encouraged – to forego the use of one of the home languages, so as not to further complicate the development of their child’s communicative skills. A researcher from the University of Geneva (UNIGE, Switzerland), in collaboration with the Universities of Thessaly (Greece) and Cambridge (Great Britain), has shown that bilingualism allows autistic children to partially compensate for deficits in theory of mind and executive functions, which are at the root of many of their challenges. These results can be read in the journal Autism Research.

Diagnosed in early childhood, autism spectrum disorder has a particular impact on a child’s social and communicative abilities. “It is a spectrum, which is why the intensity of the symptoms varies greatly”, explains Stéphanie Durrleman, a researcher in the Department of Linguistics at the UNIGE Faculty of Arts and co-author of the study. “But what children with autism have in common is that they have difficulties putting themselves in the place of their interlocutor, focusing on the latter’s point of view and thus disengaging their attention from their own perspective.” Autism therefore affects not only everything that has to do with the theory of mind – understanding the beliefs, emotions, intentions and desires of others – but also often executive functions, including attentional abilities.


Could benefits of bilingualism be applied to children with autism?

Studies on bilingualism have shown that children without autism who use several languages have increased theory of mind and executive function skills compared to monolingual children. “Bilingualism therefore seems to bring benefits precisely where the autistic child has difficulties”, says Stéphanie Durrleman. “We therefore wondered whether bilingual autistic children manage to mitigate the difficulties of their neurodevelopmental disorder by using two languages every day.”

To test this hypothesis, the researchers from the universities of Geneva, Thessaly and Cambridge followed 103 autistic children aged 6 to 15, 43 of whom were bilingual. “In order to observe the real effects of bilingualism on their socio-communicative skills, we grouped them according to their age, gender and the intensity of their autistic disorder”, explains Eleni Peristeri, researcher at the Faculty of Medicine of the University of Thessaly and co-author of the study. The participants then performed various tasks to assess their theory of mind and executive function skills. The bilinguals quickly distinguished themselves by scoring higher than their monolingual peers. “On tasks relating to theory of mind, i.e. their ability to understand another person’s behaviour by putting themselves in their place, the bilingual children gave 76% correct answers, compared with 57% for the monolingual children”, notes the Greek researcher. The same is true for executive functions: the score for correct responses in bilinguals is twice that of monolinguals. But why are the differences so clear?

“Bilingualism requires the child to work first on skills directly related to theory of mind, i.e. he or she must constantly be concerned with the knowledge of others: Does the person I am speaking to speak Greek or Albanian? In what language should I talk to him or her? Then, in a second phase, the child uses his executive functions by focusing his attention on one language, while inhibiting the second”, explains Eleni Peristeri. This is real gymnastics for the brain, acting precisely on the deficits linked to the autistic disorder.


Encouraging bilingualism instead of giving it up

“From our evaluations, we can clearly see that bilingualism is very beneficial for children with autism spectrum disorders”, enthuses Stéphanie Durrleman. To verify that the socio-economic level in which the participants grew up did not play a role in the results, it was also recorded; it turned out that the bilingual children mostly came from a lower socio-economic environment than the monolinguals. “We can therefore affirm that benefits in theory of mind and executive functions emerge in bilinguals, even when there is a socio-economic disadvantage”, says the Geneva researcher.

These findings are important for the care of children diagnosed with autism. “Indeed, as this neurodevelopmental disorder often affects language acquisition, bilingual families tend to give up the use of one of the two languages, so as not to further complicate the learning process. However, it is now clear that, far from putting autistic children in difficulty, bilingualism can, on the contrary, help these children to overcome several aspects of their disorder, serving as a kind of natural therapy”, concludes Stéphanie Durrleman.

Featured image credit: Garner


Reference: Peristeri, E., Baldimtsi, E., Vogelzang, M., Tsimpli, I. M., & Durrleman, S. (2021). The cognitive benefits of bilingualism in autism spectrum disorder: Is theory of mind boosted and by which underlying factors? Autism Research, 1– 15. https://doi.org/10.1002/aur.2542


Provided by University of Geneva

‘Sticky’ Speech And Other Evocative Words May Improve Language (Language)

New study finds that iconicity in parents’ speech helps children learn new words

Some words sound like what they mean. For example, “slurp” sounds like the noise we make when we drink from a cup, and “teeny” sounds like something that is very small. This resemblance between how a word sounds and what it means is known as iconicity.

In her lab at the University of Miami, Lynn Perry, an associate professor in the College of Arts and Sciences Department of Psychology, previously found that children tend to learn words higher in iconicity earlier in development than they do words lower in iconicity. She also found that adults tend to use more iconic words when they speak to children than when they speak to other adults.

“That got us curious about why,” said Stephanie Custode, a doctoral student in psychology, who worked with Perry to answer questions posed by her prior work. “Does iconicity play a causal role in children’s language development, helping them learn new words, eventually even those words that have non-iconic, or arbitrary, sound-meaning associations?” 

For their new study, published in the journal Cognitive Science, the researchers explored whether parents who used iconic words while playing with novel objects with their children, aged between 1 and 2, helped the children learn those objects’ names. The objects were novel toys and foods that the researchers made and gave names to, like the word “blicket” to describe a clay toy with a made-up shape. They found that when parents named a novel object, their children were more likely to remember those novel names later if the parent also used highly iconic words in the same sentence. This was true both for parents speaking English and Spanish.

“Consider when a parent teaches their child about ‘cats’ by talking about how they ‘meow,’ or about a sweater by talking about how ‘fuzzy’ it is, or about ‘honey’ by talking about how sticky it is,” Perry said. “The resemblance between the sound of a word like ‘sticky’ and the texture of the honey helps the child pay attention to that property. If the parent also says ‘honey’ while describing its stickiness, the child can form a stronger memory of that new word and its meaning, because they’re paying attention to its important properties—its sticky texture in this case.”  

The researchers found it was beneficial for parents to use iconic language specifically when they introduced a novel name. “If a parent talks about stickiness without saying the name ‘honey’, there’s no new name to associate with that sticky texture, and if a parent names the honey but talks about it being yellow, a word that doesn’t particularly sound like its meaning, the child might pay less attention to the honey and forget about it. In both cases, the child wouldn’t learn the new word ‘honey’,” said Perry.

From these findings, the researchers concluded that iconicity could be an important cue that parents and other caregivers can use to facilitate word learning.

Next the researchers plan to investigate whether using more iconic words can help children with language delays learn new words. They also are interested in studying how the way parents talk to children changes over time and whether they decrease their use of iconic language as they recognize that their child is becoming a stronger word learner.

The study, “What Is the Buzz About Iconicity? How Iconicity in Caregiver Speech Supports Children’s Word Learning,” is now available in Cognitive Science.


Provided by University of Miami

Ancestors May Have Created ‘Iconic’ Sounds as Bridge to First Languages (Language)

The ‘missing link’ that helped our ancestors to begin communicating with each other through language may have been iconic sounds, rather than charades-like gestures – giving rise to the unique human power to coin new words describing the world around us, a new study reveals.

It was widely believed that, in order to get the first languages off the ground, our ancestors first needed a way to create novel signals that could be understood by others, relying on visual signs whose form directly resembled the intended meaning.

However, an international research team, led by experts from the University of Birmingham and the Leibniz-Centre General Linguistics (ZAS), Berlin, has discovered that iconic vocalisations can convey a much wider range of meanings more accurately than previously supposed. The vocalisations tested included sounds for meanings such as ‘cut’, ‘tiger’, ‘water’ and ‘good’.

The researchers tested whether people from different linguistic backgrounds could understand novel vocalizations for 30 different meanings common across languages and which might have been relevant in early language evolution.

These meanings spanned animate entities, including humans and animals (child, man, woman, tiger, snake, deer), inanimate entities (knife, fire, rock, water, meat, fruit), actions (gather, cook, hide, cut, hunt, eat, sleep), properties (dull, sharp, big, small, good, bad), quantifiers (one, many) and demonstratives (this, that).

The team published their findings in Scientific Reports, highlighting that the vocalizations produced by English speakers could be understood by listeners from a diverse range of cultural and linguistic backgrounds. Participants included speakers of 28 languages from 12 language families, including groups from oral cultures such as speakers of Palikúr living in the Amazon forest and speakers of Daakie on the South Pacific island of Vanuatu. Listeners from each language were more accurate than chance at guessing the intended referent of the vocalizations for each of the meanings tested.

Co-author Dr Marcus Perlman, Lecturer in English Language and Linguistics at the University of Birmingham, commented: “Our study fills in a crucial piece of the puzzle of language evolution, suggesting the possibility that all languages – spoken as well as signed – may have iconic origins.

“The ability to use iconicity to create universally understandable vocalisations may underpin the vast semantic breadth of spoken languages, playing a role similar to representational gestures in the formation of signed languages.”

Co-author Dr Bodo Winter, Senior Lecturer in Cognitive Linguistics at the University of Birmingham, commented: “Our findings challenge the often-cited idea that vocalisations have limited potential for iconic representation, demonstrating that in the absence of words people can use vocalizations to communicate a variety of meanings – serving effectively for cross-cultural communication when people lack a common language.”

An online experiment allowed the researchers to test whether a large number of diverse participants around the world were able to understand the vocalisations. A field experiment using 12 easy-to-picture meanings allowed them to test whether participants living in predominantly oral societies were also able to understand the vocalisations.

They found that some meanings were consistently guessed more accurately than others. In the online experiment, for example, accuracy ranged from 98.6% for the action ‘sleep’ to 34.5% for the demonstrative ‘that’. Participants were best with the meanings ‘sleep’, ‘eat’, ‘child’, ‘tiger’, and ‘water’, and worst with ‘that’, ‘gather’, ‘dull’, ‘sharp’ and ‘knife’.

The researchers highlight that while their findings provide evidence for the potential of iconic vocalisations to figure in the creation of original spoken words, they do not detract from the hypothesis that iconic gestures also played a critical role in the evolution of human communication, as they are known to play in the modern emergence of signed languages.

Featured image: ‘Tiger’ – one of the concepts vocalised by scientists © University of Birmingham


Notes to editors:

  • ‘Novel Vocalizations are Understood across Cultures’ – Aleksandra Ćwiek, Susanne Fuchs, Christoph Draxler, Eva Liina Asu, Dan Dediu, Katri Hiovain, Shigeto Kawahara, Sofia Koutalidis, Manfred Krifka, Pärtel Lippus, Gary Lupyan, Grace E. Oh, Jing Paul, Caterina Petrone, Rachid Ridouane, Sabine Reiter, Nathalie Schümchen, Ádám Szalontai, Özlem Ünal-Logacev, Jochen Zeller, Bodo Winter, and Marcus Perlman is published in Scientific Reports.

Provided by University of Birmingham

Foreign Language Learners Should Be Exposed to Slang In the Classroom And Here’s Why (Language)

Experts say English slang and regional dialect should not be banned from classrooms. But when you’re getting to grips with a second language, how helpful is it to learn non-standard lingo?

Very, says Sascha Stollhans, of the Department of Languages and Cultures at Lancaster University, who argues that standardised language norms are artificial and language learners should learn about all aspects of language, even the controversial ones.

In his policy paper, just published in the Languages, Society & Policy Journal, he says:

  • There are concerns among professionals that introducing learners to ‘non-standard’ language could lead to ambiguity and confusion and that students might be penalised for using it in assessments.
  • Linguistic variation is a rich area of study that can appeal to language learners and have a positive impact on motivation.  
  • Attitudes to language norms and variation in language teaching vary widely, and current textbooks deal with language variation in very different ways.

 “Language learners will need to be able to understand slang and dialect when mixing with so-called ‘native’ speakers – which is easier than ever in this digital age – just take a look at the language used on Twitter,” says Mr Stollhans, a Senior Teaching Associate in German Studies at Lancaster.

“More than that, in the UK, where school-based language learning has been in crisis mode for a while now, learning more about the varied ways in which ‘native speakers’ in different places and contexts communicate could be just the way to get students motivated and interested.

“This process can be extremely creative and tell us a lot about other cultures. It can also be an important step towards a more diverse and inclusive curriculum. After all, language norms are often political and historical, and there are a variety of speakers of a language.”

The paper makes concrete recommendations for policy-makers, publishers, authors of learning materials, examination boards and teacher training providers.

It urges:

  • Curriculum leaders and teachers in the UK to make it their mission to enlighten learners about the rich and dynamic forms of variation a language entails when learning their first language – the first step to learning the complexity of other languages
  • Examination boards to accept the use of non-standard variations in tests and examinations, in appropriate contexts
  • Teacher training to include appropriate linguistics elements to sensitise teachers to issues around variation and equip them with the means to be able to make informed decisions about the inclusion of language varieties in their teaching. This is something Mr Stollhans has been campaigning for with the national “Linguistics in Modern Foreign Languages” network.

The policy paper is part of a special collection of policy papers on “Language inequality in education, law and citizenship” that follows on from a meeting which brought together academics with practitioners – teachers, examiners, dictionary-makers, speech therapists, legislators, translators, lobbyists, policy-makers, and others – to examine how assumptions and beliefs about correct, acceptable or standard languages impact on everyday life in a multilingual world.

The meeting, for which Mr Stollhans was invited to chair the education panel, was part of the Arts and Humanities Research Council-funded MEITS project.

Featured image: Computer screen showing the word “hello” in different languages © Lancaster University


Provided by Lancaster University

Actively Speaking Two Languages Protects Against Cognitive Decline (Language / Neuroscience)

Researchers conclude that regularly speaking two languages contributes to cognitive reserve and delays the onset of the symptoms associated with cognitive decline and dementia.

In addition to enabling us to communicate with others, languages are our instrument for conveying our thoughts, identity, knowledge, and how we see and understand the world. Having a command of more than one enriches us and offers a doorway to other cultures, as discovered by a team of researchers led by scientists at the Open University of Catalonia (UOC) and Pompeu Fabra University (UPF). Using languages actively provides neurological benefits and protects us against cognitive decline associated with ageing.

In a study published in the journal Neuropsychologia, the researchers conclude that regularly speaking two languages -and having done so throughout one’s life- contributes to cognitive reserve and delays the onset of the symptoms associated with cognitive decline and dementia.

“We have seen that the prevalence of dementia in countries where more than one language is spoken is 50% lower than in regions where the population uses only one language to communicate”, asserts researcher Marco Calabria, a member of the Speech Production and Bilingualism research group at UPF and of the Cognitive NeuroLab at the UOC, and professor of Health Sciences Studies, also at the UOC.

Previous work had already found that the use of two or more languages throughout life could be a key factor in increasing cognitive reserve and delaying the onset of dementia, and that it entailed advantages in memory and executive functions.

“We wanted to find out about the mechanism whereby bilingualism contributes to cognitive reserve with regard to mild cognitive impairment and Alzheimer’s, and if there were differences regarding the benefit it confers between the varying degrees of bilingualism, not only between monolingual and bilingual speakers”, points out Calabria, who led the study.

Thus, unlike other studies, the researchers defined a scale of bilingualism: from people who speak one language but are exposed, passively, to another, to individuals who have an excellent command of both and use them interchangeably in their daily lives. To construct this scale, they took several variables into account, such as the age of acquisition of the second language, how much each language is used, and whether the two are used alternately in the same context, among others.
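As a rough illustration of how such a graded scale might be assembled, the sketch below folds the kinds of variables just mentioned into a single 0–1 score. The variables, scaling and weights are entirely hypothetical and are not taken from the study; they only show how age of acquisition, balance of use and within-context switching could be combined into one number.

```python
# Hypothetical sketch of a graded bilingualism score combining the kinds of
# variables the article mentions. The scaling and weights are illustrative,
# not the study's actual scoring procedure.

def bilingualism_score(age_of_acquisition, l2_use_fraction, switches_in_same_context):
    """Return a 0-1 score: higher means more active bilingualism."""
    early_acquisition = max(0.0, 1.0 - age_of_acquisition / 18.0)  # earlier is higher
    balanced_use = 1.0 - abs(0.5 - l2_use_fraction) * 2.0          # 50/50 use scores highest
    switching = 1.0 if switches_in_same_context else 0.0
    return round((early_acquisition + balanced_use + switching) / 3.0, 2)

print(bilingualism_score(age_of_acquisition=3, l2_use_fraction=0.5,
                         switches_in_same_context=True))   # ~0.94: active bilingual
print(bilingualism_score(age_of_acquisition=12, l2_use_fraction=0.1,
                         switches_in_same_context=False))  # ~0.18: mostly passive
```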

The researchers focused on the population of Barcelona, where there is strong variability in the use of Catalan and Spanish, with some districts that are predominantly Catalan-speaking and others where Spanish is mainly spoken. “We wanted to make use of this variability and, instead of comparing monolingual and bilingual speakers, we looked at whether within Barcelona, where everyone is bilingual to varying degrees, there was a degree of bilingualism that presented neuroprotective benefits”, Calabria explains.

Bilingualism and Alzheimer’s

At four hospitals in Barcelona and its metropolitan area, they recruited 63 healthy individuals, 135 patients with mild cognitive impairment, such as memory loss, and 68 people with Alzheimer’s, the most prevalent form of dementia. They recorded their proficiency in Catalan and Spanish using a questionnaire and established the degree of bilingualism of each subject. They then correlated this degree with the age at which each subject’s neurological diagnosis was made and with the age at which symptoms began.

To better understand the origin of the cognitive advantage, they asked the participants to perform various cognitive tasks, focusing primarily on the executive control system, since the previous studies had suggested that this was the source of the advantage. In all, participants performed five tasks over two sessions, including memory and cognitive control tests.

“We saw that people with a higher degree of bilingualism were given a diagnosis of mild cognitive impairment later than people who were passively bilingual”, states Calabria, who suggests that speaking two languages and frequently switching between them is probably a form of life-long brain training. According to the researcher, this linguistic gymnastics is related to other cognitive functions such as executive control, which is triggered when we perform several actions simultaneously, for example when driving, to help us filter relevant information.

The brain’s executive control system is related to the control system for the two languages: it must alternate between them, making the brain focus on one and then on the other so that one language does not intrude on the other when speaking.

“This system, in the context of neurodegenerative diseases, might offset the symptoms. So, when something does not work properly as a result of the disease, the brain has efficient alternative systems to solve it thanks to being bilingual”, states Calabria, who continues: “we have seen that the more you use two languages and the better language skills you have, the greater the neuroprotective advantage. Active bilingualism is, in fact, an important predictor of the delay in the onset of the symptoms of mild cognitive impairment, a preclinical phase of Alzheimer’s disease, because it contributes to cognitive reserve”.

Now, the researchers wish to verify whether bilingualism is also beneficial for other diseases, such as Parkinson’s or Huntington’s disease.

Reference: Marco Calabria, Mireia Hernández, Gabriele Cattaneo, Anna Suades, Mariona Serra, Montserrat Juncadella, Ramón Reñé, Isabel Sala, Alberto Lleó, Jordi Ortiz-Gil, Lidia Ugas, Asunción Ávila, Isabel Gómez Ruiz, César Ávila, Albert Costa (2020) “Active bilingualism delays the onset of mild cognitive impairment”, Neuropsychologia, Vol. 146, 107528, ISSN 0028-3932. https://doi.org/10.1016/j.neuropsychologia.2020.107528

Provided by Pompeu Fabra University, Barcelona

RUDN University linguists: Vocabulary size affects ability to differentiate foreign language vowels (Language)

A team of linguists from RUDN University established that a person’s ability to accurately differentiate between vowel sounds of a foreign language correlates with the size of their vocabulary in said language. The results of the study were published in the Language Learning and Development journal.

A team of linguists from RUDN University established that a person’s ability to accurately differentiate between vowel sounds of a foreign language correlates with the size of their vocabulary in said language. ©RUDN University

In linguistics, a second language (or L2) is any language that a person acquires after their mother tongue. The first language, or L1, affects the use of the second one: for example, if L1 does not have many variations of vowel pronunciation and L2 is rich in them, it will be more difficult for a speaker to perceive and reproduce the vowel sounds of the second language. It can be hard for a native Spanish speaker to feel the difference between the sounds [i] and [i:] in English because in Spanish only [i] is used. A team of linguists from RUDN University studied the effect of L2 vocabulary size on speakers’ ability to tell the difference between L2 vowel sounds.

“For our study, we chose Russian and English languages, because the former contains only five vowel sounds while the latter has a complex system of vowels. Therefore, we assumed that it would be difficult for Russian speakers to differentiate between English vowels,” said Georgios Georgiou, Ph.D., a postdoc, and a researcher at the Department of General and Russian Linguistics, Philological Faculty of RUDN.

To describe vowel sounds, scientists use a set of parameters called formants. These indicate vibration frequencies and are measured in hertz. Based on the values of the formants, a linguist can understand how the tongue moves when a vowel sound is pronounced: for example, the first formant reflects the height of the tongue in the mouth and the second its proximity to the teeth. Russian (L1) has five vowel phonemes ([i, e, a, o, u]), while English (L2) has 11. The team placed the L1 and L2 vowel sounds on a reference plane, marking the values of the first and second formants on the axes. Once the vowels were plotted as points on this plane, the team measured the distances between them and used these as an indicator of the differences between the sounds. Based on the measured distances, the team identified two English vowel pairs, [iː]/[ɪ] and [e]/[æ], that were the easiest to differentiate from one another. An experiment was then conducted to confirm this hypothesis and to study the correlation between differentiation ability and vocabulary size.
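The distance measure described above can be illustrated with a short sketch: each vowel is represented by a point whose coordinates are its first and second formants, and the Euclidean distance between two points serves as a rough index of how different the vowels sound. The formant values below are illustrative round numbers, not the ones measured in the study.

```python
import math

# Illustrative (not the study's) mean formant values in Hz: (F1, F2).
# F1 roughly tracks tongue height, F2 roughly tracks frontness.
RUSSIAN_VOWELS = {"i": (300, 2300), "e": (440, 1990), "a": (700, 1300),
                  "o": (450, 900), "u": (320, 750)}
ENGLISH_VOWELS = {"i:": (280, 2320), "I": (400, 2000),
                  "e": (550, 1900), "ae": (700, 1750)}

def formant_distance(v1, v2):
    """Euclidean distance between two vowels in the F1/F2 plane."""
    (f1a, f2a), (f1b, f2b) = v1, v2
    return math.hypot(f1a - f1b, f2a - f2b)

# Distance within each English contrast: larger distances suggest pairs
# that should be easier to tell apart, following the article's logic.
for a, b in [("i:", "I"), ("e", "ae")]:
    d = formant_distance(ENGLISH_VOWELS[a], ENGLISH_VOWELS[b])
    print(f"[{a}] vs [{b}]: {d:.0f} Hz apart in F1/F2 space")
```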

The team chose 28 Russian-speaking students from 17 to 19 years of age. All of them had spent eight years on average studying English (RP) at public or language schools and had never lived in an English-speaking country for more than a month. Their English proficiency level was B2, and their proficiency in any other foreign language did not exceed A2. The participants were divided into two groups with small (around 5,500 words) and large (around 7,150 words) vocabularies, based on the results of an online test with multiple-choice questions. In the test, the participants were asked to choose the correct definitions of English words, which gave the researchers an estimate of their vocabulary size. After that, both groups took a psychoacoustic test that consisted of two parts. First, the students listened to English words and matched the vowels in them with the vowels of their native language. Then, they listened to groups of three words and had to identify whether the vowel in the second word matched that in the first or the third word.
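The second part of the test is a three-interval discrimination task. A minimal sketch of how responses to such a task might be recorded and scored per vowel contrast is shown below; the trial structure, labels and toy data are hypothetical and serve only to illustrate the kind of accuracy figures reported in the next paragraph.

```python
# Hypothetical sketch of scoring the three-word discrimination task described
# above: for each trial the listener hears word A, word X, word B and must say
# whether X's vowel matches A or B.
from dataclasses import dataclass

@dataclass
class Trial:
    contrast: str      # e.g. "i:-I"
    correct: str       # "A" or "B" (which word actually shares X's vowel)
    response: str      # the participant's answer

def accuracy_by_contrast(trials):
    """Return percent-correct per vowel contrast."""
    totals, hits = {}, {}
    for t in trials:
        totals[t.contrast] = totals.get(t.contrast, 0) + 1
        if t.response == t.correct:
            hits[t.contrast] = hits.get(t.contrast, 0) + 1
    return {c: 100 * hits.get(c, 0) / n for c, n in totals.items()}

# Toy data for one participant (entirely made up for illustration).
trials = [Trial("i:-I", "A", "A"), Trial("i:-I", "B", "A"),
          Trial("e-ae", "B", "B"), Trial("e-ae", "A", "A")]
print(accuracy_by_contrast(trials))   # e.g. {'i:-I': 50.0, 'e-ae': 100.0}
```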

It turned out that one’s ability to associate L2 vowels with the vowels of one’s mother tongue does not depend on vocabulary size. The participants with a small vocabulary managed to match Russian and English sounds in 95% of cases, and the students with a large vocabulary in 94.7% of cases. However, vocabulary size did affect the participants’ ability to tell the difference between vowels in similar words. For example, students with a large vocabulary differentiated between [ɪ] and [iː] in 71% of cases, while the participants from the other group did so in only 59% of cases.

“In the future, we plan to study the effect of additional external factors, such as fluency in another foreign language, on the differentiation of L2 sounds,” added Georgios Georgiou.

References: http://dx.doi.org/10.1080/15475441.2020.1814779

Provided by RUDN University

Cognitive Elements of Language Have Existed for 40 Million Years (Language)

Humans are not the only beings that can identify rules in complex language-like constructions – monkeys and great apes can do so, too, a study at the University of Zurich has shown. Researchers at the Department of Comparative Language Science of UZH used a series of experiments based on an ‘artificial grammar’ to conclude that this ability can be traced back to our ancient primate ancestors.

The chimpanzees learned that certain sounds were always followed by other specific sounds, even if they were sometimes separated by other acoustic signals. (Image: Istock.com/Juanmonino)

Language is one of the most powerful tools available to humankind, as it enables us to share information, culture, views and technology. “Research into language evolution is thus crucial if we want to understand what it means to be human,” says Stuart Watson, postdoctoral researcher at the Department of Comparative Language Science of the University of Zurich. Until now, however, little research has been conducted about how this unique communication system came to be.

Identifying connections between words
An international team led by Professor Simon Townsend at the Department of Comparative Language Science of the University of Zurich has now shed new light on the evolutionary origins of language. Their study examines one of the most important cognitive elements needed for language processing – that is, the ability to understand the relationship between the words in a phrase, even if they are separated by other parts of the phrase, known as a “non-adjacent dependency”. For example, we know that in the sentence “the dog that bit the cat ran away”, it is the dog who ran away, not the cat, even though there are several other words in between the two phrases. A comparison between apes, monkeys and humans has now shown that the ability to identify such non-adjacent dependencies is likely to have developed as far back as 40 million years ago.

Acoustic signals instead of words

The researchers used a novel approach in their experiments: They invented an artificial grammar, where sequences are formed by combining different sounds rather than words. This enabled the researchers to compare the ability of three different species of primates to process non-adjacent dependencies, even though they do not share the same communication system. The experiments were carried out with common marmosets – a monkey native to Brazil – at the University of Zurich, chimpanzees (University of Texas) and humans (Osnabrück University).

Mistakes followed by telltale looks
First, the researchers taught their test subjects to understand the artificial grammar in several practice sessions. The subjects learned that certain sounds were always followed by other specific sounds (e.g. sound ‘B’ always follows sound ‘A’), even if they were sometimes separated by other acoustic signals (e.g. ‘A’ and ‘B’ are separated by ‘X’). This simulates a pattern in human language, where, for example, we expect a noun (e.g. “dog”) to be followed by a verb (e.g. “ran away”), regardless of any other phrasal parts in between (e.g. “that bit the cat”).
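The dependency rule described here can be made concrete with a small sketch: sequences are built so that an ‘A’ sound must eventually be followed by a ‘B’ sound, possibly with filler sounds in between, and a simple check flags sequences that break the rule. The sound labels and generation procedure are illustrative assumptions, not the actual stimuli used in the study.

```python
import random

# Illustrative sketch of the artificial-grammar idea described above:
# sound 'B' must eventually follow sound 'A', optionally separated by
# filler sounds, mirroring a non-adjacent dependency.
FILLERS = ["X1", "X2", "X3"]

def grammatical_sequence(n_fillers=1):
    """A followed by B, with intervening filler sounds."""
    return ["A"] + random.sample(FILLERS, n_fillers) + ["B"]

def violates_dependency(sequence):
    """True if an 'A' is never followed (eventually) by a 'B'."""
    saw_a = False
    for sound in sequence:
        if sound == "A":
            saw_a = True
        elif sound == "B" and saw_a:
            saw_a = False
    return saw_a  # an 'A' was left without its 'B'

print(grammatical_sequence(2))                 # e.g. ['A', 'X3', 'X1', 'B']
print(violates_dependency(["A", "X1", "B"]))   # False: rule respected
print(violates_dependency(["A", "X2", "X3"]))  # True: 'B' never arrives
```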

In the actual experiments that followed, the researchers played sound combinations that violated the previously learned rules. In these cases, the common marmosets and chimpanzees responded with an observable change of behavior; they looked at the loudspeaker emitting the sounds for about twice as long as they did towards familiar combinations of sounds. For the researchers, this was an indication of surprise in the animals caused by noticing a ‘grammatical error’. The human test subjects were asked directly whether they believed the sound sequences were correct or wrong.

Common origin of language

“The results show that all three species share the ability to process non-adjacent dependencies. It is therefore likely that this ability is widespread among primates,” says Townsend. “This suggests that this crucial element of language already existed in our most recent common ancestors with these species.” Since marmosets branched off from humanity’s ancestors around 40 million years ago, this crucial cognitive skill thus developed many millions of years before human language evolved.

Reference: Stuart K. Watson, Judith M. Burkart, Steven J. Schapiro, Susan P. Lambeth, Jutta L. Mueller and Simon W. Townsend. Non-adjacent dependency processing in monkeys, apes and humans. Science Advances, Vol. 6, No. 43, 21 October 2020. DOI: 10.1126/sciadv.abb0725

Provided by University of Zurich