
‘Sticky’ Speech And Other Evocative Words May Improve Language (Language)

New study finds that iconicity in parents’ speech helps children learn new words

Some words sound like what they mean. For example, “slurp” sounds like the noise we make when we drink from a cup, and “teeny” sounds like something that is very small. This resemblance between how a word sounds and what it means is known as iconicity.

In her lab at the University of Miami, Lynn Perry, an associate professor in the College of Arts and Sciences Department of Psychology, previously found that children tend to learn words higher in iconicity earlier in development than they do words lower in iconicity. She also found that adults tend to use more iconic words when they speak to children than when they speak to other adults.

“That got us curious about why,” said Stephanie Custode, a doctoral student in psychology, who worked with Perry to answer questions posed by her prior work. “Does iconicity play a causal role in children’s language development, helping them learn new words, eventually even those words that have non-iconic, or arbitrary, sound-meaning associations?” 

For their new study, published in the journal Cognitive Science, the researchers explored whether parents who used iconic words while playing with novel objects with their 1- to 2-year-old children helped the children learn those objects’ names. The objects were novel toys and foods that the researchers made and gave names to, like the word “blicket” for a clay toy with a made-up shape. They found that when parents named a novel object, their children were more likely to remember those novel names later if the parent also used highly iconic words in the same sentence. This was true for both English-speaking and Spanish-speaking parents.

“Consider when a parent teaches their child about ‘cats’ by talking about how they ‘meow,’ or about a sweater by talking about how ‘fuzzy’ it is, or about ‘honey’ by talking about how sticky it is,” Perry said. “The resemblance between the sound of a word like ‘sticky’ and the texture of the honey helps the child pay attention to that property. If the parent also says ‘honey’ while describing its stickiness, the child can form a stronger memory of that new word and its meaning, because they’re paying attention to its important properties—its sticky texture in this case.”  

The researchers found it was beneficial for parents to use iconic language specifically when they introduced a novel name. “If a parent talks about stickiness without saying the name ‘honey’, there’s no new name to associate with that sticky texture, and if a parent names the honey but talks about it being yellow, a word that doesn’t particularly sound like its meaning, the child might pay less attention to the honey and forget about it. In both cases, the child wouldn’t learn the new word ‘honey’,” said Perry.

From these findings, the researchers concluded that iconicity could be an important cue that parents and other caregivers can use to facilitate word learning.

Next, the researchers plan to investigate whether using more iconic words can help children with language delays learn new words. They are also interested in studying how the way parents talk to children changes over time and whether parents decrease their use of iconic language as they recognize that their child is becoming a stronger word learner.

The study, “What Is the Buzz About Iconicity? How Iconicity in Caregiver Speech Supports Children’s Word Learning,” is now available in Cognitive Science.


Provided by University of Miami

Ancestors May Have Created ‘Iconic’ Sounds as Bridge to First Languages (Language)

The ‘missing link’ that helped our ancestors to begin communicating with each other through language may have been iconic sounds, rather than charades-like gestures – giving rise to the unique human power to coin new words describing the world around us, a new study reveals.

It was widely believed that, in order to get the first languages off the ground, our ancestors first needed a way to create novel signals that could be understood by others, relying on visual signs whose form directly resembled the intended meaning.

However, an international research team, led by experts from the University of Birmingham and the Leibniz-Centre General Linguistics (ZAS), Berlin, has discovered that iconic vocalisations can convey a much wider range of meanings more accurately than previously supposed.

The researchers tested whether people from different linguistic backgrounds could understand novel vocalizations for 30 different meanings that are common across languages and that might have been relevant in early language evolution.

These meanings spanned animate entities, including humans and animals (child, man, woman, tiger, snake, deer), inanimate entities (knife, fire, rock, water, meat, fruit), actions (gather, cook, hide, cut, hunt, eat, sleep), properties (dull, sharp, big, small, good, bad), quantifiers (one, many) and demonstratives (this, that).

The team published their findings in Scientific Reports, highlighting that the vocalizations produced by English speakers could be understood by listeners from a diverse range of cultural and linguistic backgrounds. Participants included speakers of 28 languages from 12 language families, including groups from oral cultures such as speakers of Palikúr living in the Amazon forest and speakers of Daakie on the South Pacific island of Vanuatu. Listeners from each language were more accurate than chance at guessing the intended referent of the vocalizations for each of the meanings tested.

Co-author Dr Marcus Perlman, Lecturer in English Language and Linguistics at the University of Birmingham, commented: “Our study fills in a crucial piece of the puzzle of language evolution, suggesting the possibility that all languages – spoken as well as signed – may have iconic origins.

“The ability to use iconicity to create universally understandable vocalisations may underpin the vast semantic breadth of spoken languages, playing a role similar to representational gestures in the formation of signed languages.”

Co-author Dr Bodo Winter, Senior Lecturer in Cognitive Linguistics at the University of Birmingham, commented: “Our findings challenge the often-cited idea that vocalisations have limited potential for iconic representation, demonstrating that in the absence of words people can use vocalizations to communicate a variety of meanings – serving effectively for cross-cultural communication when people lack a common language.”

An online experiment allowed the researchers to test whether a large number of diverse participants around the world were able to understand the vocalisations. A field experiment using 12 easy-to-picture meanings allowed them to test whether participants living in predominantly oral societies were also able to understand the vocalisations.

They found that some meanings were consistently guessed more accurately than others. In the online experiment, for example, accuracy ranged from 98.6% for the action ‘sleep’ to 34.5% for the demonstrative ‘that’. Participants were best with the meanings ‘sleep’, ‘eat’, ‘child’, ‘tiger’, and ‘water’, and worst with ‘that’, ‘gather’, ‘dull’, ‘sharp’ and ‘knife’.
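
To make “more accurate than chance” concrete, the sketch below shows the kind of check such a claim rests on: if each trial asks a listener to pick the intended meaning from a fixed set of options, chance accuracy is one divided by the number of options, and a one-sided binomial test asks how likely the observed number of correct guesses would be under pure guessing. The trial count, number of answer options, and correct-guess tally here are illustrative placeholders, not figures from the study.

```python
from scipy.stats import binom

# Illustrative placeholders only (not figures from the study):
# each trial asks a listener to pick the intended meaning of a vocalization
# from n_options candidate meanings, so chance accuracy is 1 / n_options.
n_options = 6      # assumed number of answer choices per trial
n_trials  = 30     # one trial per tested meaning
n_correct = 19     # hypothetical number of correct guesses

chance = 1.0 / n_options
observed_accuracy = n_correct / n_trials

# One-sided binomial test: probability of guessing at least this well
# if the listener were only responding at chance level.
p_value = binom.sf(n_correct - 1, n_trials, chance)

print(f"chance = {chance:.1%}, observed = {observed_accuracy:.1%}, p = {p_value:.3g}")
```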

The researchers highlight that while their findings provide evidence for the potential of iconic vocalisations to figure in the creation of original spoken words, they do not detract from the hypothesis that iconic gestures also played a critical role in the evolution of human communication, as they are known to play in the modern emergence of signed languages.

Featured image: ‘Tiger’ – one of the concepts vocalised by scientists © University of Birmingham


Notes to editors:

  • ‘Novel Vocalizations are Understood across Cultures’ – Aleksandra Ćwiek, Susanne Fuchs, Christoph Draxler, Eva Liina Asu, Dan Dediu, Katri Hiovain, Shigeto Kawahara, Sofia Koutalidis, Manfred Krifka, Pärtel Lippus, Gary Lupyan, Grace E. Oh, Jing Paul, Caterina Petrone, Rachid Ridouane, Sabine Reiter, Nathalie Schümchen, Ádám Szalontai, Özlem Ünal-Logacev, Jochen Zeller, Bodo Winter, and Marcus Perlman is published in Scientific Reports.

Provided by University of Birmingham

Language is More Than Speaking: How the Brain Processes Sign Language (Neuroscience)

Over 70 million deaf people around the world use one of more than 200 different sign languages as their preferred form of communication. Although sign languages engage similar brain structures as spoken languages, it has been difficult to identify the brain regions that process both forms of language equally. Scientists at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) have now discovered in a meta-analysis that Broca’s area in the left hemisphere of the brain, which has already been shown to be the central hub for spoken languages, is also the crucial brain region for sign languages. This is where the grammar and meaning of language are processed, regardless of whether the language is spoken or signed. This shows that our brain is generally specialized in processing linguistic information. Whether this information is spoken or signed seems to be of secondary importance.

The ability to speak is one of the essential characteristics that distinguishes humans from other animals. Many people would probably intuitively equate speech and language. However, cognitive science research on sign languages since the 1960s paints a different picture: today it is clear that sign languages are fully autonomous languages with a complex organization on several linguistic levels, such as grammar and meaning. Previous studies on the processing of sign language in the human brain had already found both similarities and differences between sign languages and spoken languages. Until now, however, it has been difficult to derive a consistent picture of how both forms of language are processed in the brain.

Researchers at the MPI CBS now wanted to know which brain regions are actually involved in the processing of sign language across different studies – and how large the overlap is with brain regions that hearing people use for spoken language processing. In a meta-study recently published in the journal Human Brain Mapping, they pooled data from sign language processing experiments conducted around the world. “A meta-study gives us the opportunity to get an overall picture of the neural basis of sign language. So, for the first time, we were able to statistically and robustly identify the brain regions that were involved in sign language processing across all studies,” explains Emiliano Zaccarella, last author of the paper and group leader in the Department of Neuropsychology at the MPI CBS.

The researchers found that the so-called Broca’s area, in the frontal lobe of the left hemisphere, was involved in the processing of sign language in almost every study evaluated. This brain region has long been known to play a central role in spoken language, where it is used for grammar and meaning. In order to better classify their results from the current meta-study, the scientists compared their findings with a database containing several thousand studies with brain scans.

The Leipzig-based researchers were indeed able to confirm that there is an overlap between spoken and signed language in Broca’s area. They also succeeded in showing the role played by the right frontal lobe – the counterpart to Broca’s area on the left side of the brain. It, too, appeared repeatedly in many of the sign language studies evaluated, because it processes non-linguistic aspects of signs, such as spatial and social information. This means that movements of the hands, face and body – of which signs consist – are in principle perceived similarly by deaf and hearing people. In deaf people, however, these movements additionally activate the language network in the left hemisphere of the brain, including Broca’s area. Deaf signers therefore perceive the gestures as carrying linguistic content – rather than as pure movement sequences, as is the case for hearing people.

The results demonstrate that Broca’s area in the left hemisphere is a central node in the language network of the human brain. Depending on whether people use language in the form of signs, sounds or writing, it works together with other networks. Broca’s area thus processes not only spoken and written language, as has been known up to now, but also abstract linguistic information in any form of language in general. “The brain is therefore specialized in language per se, not in speaking,” explains Patrick C. Trettenbrein, first author of the publication and doctoral student at the MPI CBS. In a follow-up study, the research team now aims to find out whether the different parts of Broca’s area are also specialized in either the meaning or the grammar of sign language in deaf people, similar to hearing people.

Featured image: Our brain is generally specialized in processing linguistic information. Whether this information is spoken or signed seems to be of secondary importance. © shutterstock


Reference: Trettenbrein, P. C., Papitto, G., Friederici, A. D., & Zaccarella, E. (2020), “The functional neuroanatomy of language without speech: An ALE meta-analysis of sign language”, Human Brain Mapping, 42(3), 699–712. https://doi.org/10.1002/hbm.25254


Provided by Max Planck Institute for Human Cognitive and Brain Sciences

Brain Activity During Speaking Varies Between Simple & Complex Grammatical Forms (Neuroscience)

A study shows that brain activity during speaking varies between simple and complex grammatical forms

Some languages require less neural activity than others. But these are not necessarily the ones we would imagine. In a study published today in the journal PLOS Biology, researchers at the University of Zurich have shown that languages that are often considered “easy” actually require an enormous amount of work from our brains.

Speaking is something that comes across as an effortless process, almost working by itself. Our brain, however, has a lot of work to do when we construct a sentence. “In addition, languages differ in myriad ways and this also means that there are differences in how we plan what we want to say in different languages,” says Balthasar Bickel, senior author of the study and a professor at the University of Zurich. 

And if some languages seem easier, it is because they make fewer distinctions in their grammar. While English always uses the same article “the” (e.g., in “The tree is tall” and “Snow covers the tree”), German makes a distinction between der (subject) and den (object) (e.g., in “Der Baum ist groß” and “Schnee bedeckt den Baum”).

Analysing the brain just before speech

To investigate this, researchers at the University of Zurich, in collaboration with international colleagues, measured the brain activity of Hindi speakers while they described different images. This is the first time that the brain processes involved in planning sentences before speaking have been studied with high temporal resolution. “Until now, similar methods have only been used for planning single words, but not for complete sentences,” explains Sebastian Sauppe, lead author of the study.

An ending with many possibilities

Researchers have discovered that although a language may seem “easier” to us at first glance, it actually requires more work from our neurons. They found that having fewer grammatical distinctions makes planning particularly demanding for the brain and requires more neural activity. The likely reason for this is that having fewer distinctions means keeping more choices open for speakers for how to continue a sentence. 

“This has, however, a crucial advantage for speakers: languages with fewer distinctions allow speakers to commit to the whole sentence only late in the planning process”, adds Sebastian Sauppe. This finding contributes to explaining why languages with fewer distinctions in their grammar are found more often among the world’s languages, which had been shown by an earlier study of the same research group.

The research is part of the NCCR Evolving Language, a new national research centre which has set itself the goal of unraveling the biological underpinnings of language, its evolutionary past and the challenges imposed by novel technologies.

Featured image: Simple syntax, hard to plan ©NCCR


Reference: Sebastian Sauppe, Kamal K. Choudhary, Nathalie Giroud, Damián E. Blasi, Elisabeth Norcliffe, Shikha Bhattamishra, Mahima Gulati, Aitor Egurtzegi, Ina Bornkessel-Schlesewsky, Martin Meyer, Balthasar Bickel, “Neural signatures of syntactic variation in speech planning”, PLOS Biology, 2021. https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001038


Provided by NCCR

Stimulating Brain Pathways Shows Origins of Human Language and Memory (Neuroscience)

Scientists have identified that the evolutionary development of human and primate brains may have been similar for communication and memory.

Although speech and language are unique to humans, experts have found that the brain’s pathway is similarly wired in monkeys, which could signify an evolutionary process dating back at least 25 million years.

In a study, published in the journal Neuron, teams led by Newcastle University and the University of Iowa, compared auditory cortex information from humans and primates and found strong links.

Professor Chris Petkov, from Newcastle University’s Faculty of Medical Sciences, said: “Our language abilities help us to crystallise memories and make them vivid, such as ‘the singer sounded like a nightingale’.

“Therefore, it’s often thought that the human language and memory brain systems went through a substantial transformation during our recent evolutionary history, distinguishing us from every other living animal.

“We were astounded to see such striking similarity with other primates, and this discovery has substantial importance for science and neurological disorders.”


Stimulating auditory cortex

Scientists used information from neurosurgery patients being monitored for treatment. In humans, the effects of stimulating a specific part of the brain can be visualized when brain imaging is used at the same time.

The experts also compared the results from stimulating auditory cortex and visualising areas important for language and memory in monkeys.

The brain stimulation highlighted a previously unseen ancestral brain highway system that is deeply shared by humans and monkeys, one that was likely present in the primate ancestors common to both species.

The finding is important because brain stimulation is a common treatment for neurological and psychiatric disorders. However, how brain stimulation works is not well understood and requires work that cannot be conducted with humans. Work with non-human primates has paved the way for current brain treatments, including those for Parkinson’s disease.

Inspiring new research

The study has generated unique new brain scanning information that can now be globally shared to inspire further discovery by the international scientific community.

Professor Matthew Howard III, chief neurosurgeon at the University of Iowa Carver Medical Center, USA, co-author of the study, said: “This discovery has tremendous potential for understanding how brain stimulation could help patients, which requires studies with animal models not possible to conduct with humans.”

Professor Tim Griffiths, consultant neurologist at Newcastle University, also co-author of the study, added: “This discovery has already inspired new research underway with neurology and neurosurgery patients.”


Reference

Common fronto-temporal effective connectivity in humans and monkeys. Francesca Rocchi et al. Neuron. DOI: 10.1016/j.neuron.2020.12.026


Provided by Newcastle University

The Evolution of Honesty (Psychology)

New research explains why we tend to tell the truth instead of lie.

As one who studies deception for a living, I am often asked why some people seem to lie a lot. That question always seemed like a fair one to me. We can all immediately recall examples of lying politicians, corrupt businessmen, conniving lovers, and duplicitous coworkers. These deceivers grab our attention because their dishonesty is so far outside the norms of society. We feel fortunate that these big liars are rare, with most people in our communities acting just like us—honest.

But not long ago, another question crossed my mind: Why are most people so honest? Sure, there are some big liars out there, but most people are honest most of the time, even when being dishonest might help them get ahead. Why, when lying and deception often would allow one to gain an advantage, would honesty be such a central feature of human nature? The tendency to be honest with those around us seems so pervasive that it is almost as if the tendency is baked into human psychology along with sociality, curiosity, language, and other nearly universal traits. Researchers have recently begun to explore this question from an evolutionary perspective.

Competing Goals

When two individuals interact, they often have divergent goals. When buying a car, the salesperson and the customer have a shared goal of facilitating the transaction, but they also have subordinate goals that are diametrically opposed. The salesperson wants to sell the car for a high price, and the customer wants to purchase the car for a low price. Similarly, we can see these opposing goals in romantic relationships with each partner prioritizing some goals such as marriage, having kids, buying a new car, etc., that may come into conflict with the goals of the other partner. Once a conflict of interests emerges, each person may use a combination of strategies to protect their interests. They negotiate, argue, beg, make tradeoffs, share, fight, leave, compromise, etc. One such strategy is to deceive. If Sylvia wants to have an affair, but her spouse insists on monogamy, Sylvia can lie and say that she is spending the evening with friends. Likewise, if John wants the day off from work, and his boss wants him to work, John can simply lie and say that he is sick. People can use deception to gain the upper hand in a struggle for their own interests when those interests come into conflict with another’s.

Natural Honesty

But it turns out that most people are honest. In some of my own studies, I have found that most people report lying very rarely, even when there is no serious prospect of being caught if they lied. In laboratory studies on deception, people tend to honestly report their performance on a dice-rolling task, even when lying would profit them financially. Even those people who do lie tend to do so in ways that fail to maximize their financial advantage. Even more perplexing, people who actually do come out ahead in such games sometimes lie in order to appear less successful than they actually were.

So why do people behave so prosocially when there are clear financial incentives to lie? Some researchers have argued that we humans have evolved prosocial preferences, both for others and for ourselves. We have evolved to cooperate. Why would we evolve prosocial tendencies when Machiavellian tendencies (the tendency to exploit, manipulate, and deceive others in order to achieve our own goals) would seem to obviously benefit one’s selfish interests? Evolutionary theory would argue that the honest approach must yield some adaptive advantage. That is, being prosocially honest must, in the end, produce more benefits to the individual than the Machiavellian approach.

Cooperation and Survival

The key to understanding this puzzle seems to be in the power of cooperation. Humans are a social species. There is pretty compelling evidence that without cooperation, people are much less likely to survive and thrive. By studying hunter-gatherers, anthropologists and evolutionary psychologists have discovered that our Paleolithic ancestors would be very unlikely to live long if they attempted to go it alone. In hunter-gatherer tribes, individuals are often left helpless by injuries and illness. Without cooperative help from others, these sick and hurt people would die of starvation. Likewise, without communal food gathering and sharing, each individual would not survive the feast and famine cycle of unpredictable hunting and gathering. Together, though, they share their bounty so that when each one inevitably has a bad day, the other members of the group will help them survive.

Evolution of Honesty

But before people will want to cooperate with you, they will need to know that you will reciprocate. People are vigilant observers of others’ behaviors. We note when someone is a cheat, when they are dishonest, or when they are cheap. We also notice when they share, when they pull their weight, and when they are genuine. We selectively cooperate with those who are themselves good cooperators. So, in order to be in productive cooperative relationships with others, we must demonstrate that we are good teammates. We do this by managing our reputations. We go out of our way to show people that we carry the credentials of a good cooperator. We showcase our warmth, our loyalty, and our honesty. The survival requisite of group living has driven people to place a premium on marketing themselves as reliable cooperators.

But we need not even be consciously aware of this strategic cooperative machinery driving our behavior. Instead, we usually only notice the proximate gears that drive our behavior. We notice the guilt and shame we feel when we betray someone. We feel queasy when we fail to act fairly. We feel a loss of self-esteem when we recognize that we have been lying. Whether these drivers of honesty are hardwired into our brains or whether they are products of cultural evolution is still a matter of debate, but there does seem to be a compelling case that we humans, at least the majority of us, have evolved a tendency toward honesty, not deception.

References: (1) Heintz, C., Karabegovic, M., &, Molnar, A. (2016). The co-evolution of honesty and strategic vigilance. Frontiers in Psychology, 7, 1503. doi: 10.3389/fpsyg.2016.01503 (2) Henrich, J. (2018). Human cooperation: The hunter-gatherer puzzle. Current Biology, 28(19), 1143-1145. https://doi.org/10.1016/j.cub.2018.08.005. (3) Oesch, N. (2016). Deception as a derived function of language. Frontiers in Psychology, 7, 1485. doi:10.3389/fpsyg.2016.01485

This article was originally written by Christian L. Hart, Professor of Psychology and Director of the Psychological Science program at Texas Woman’s University. It is republished here from Psychology Today under a Creative Commons license.

To The Brain, Reading Computer Code is Not The Same As Reading Language (Neuroscience)

Neuroscientists find that interpreting code activates a general-purpose brain network, but not language-processing centers.

In some ways, learning to program a computer is similar to learning a new language. It requires learning new symbols and terms, which must be organized correctly to instruct the computer what to do. The computer code must also be clear enough that other programmers can read and understand it.

New research suggests that reading computer code does not rely on the regions of the brain that are involved in language processing. Image credit: Jose-Luis Olivares, MIT

In spite of those similarities, MIT neuroscientists have found that reading computer code does not activate the regions of the brain that are involved in language processing. Instead, it activates a distributed network called the multiple demand network, which is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles.

However, although reading computer code activates the multiple demand network, it appears to rely more on different parts of the network than math or logic problems do, suggesting that coding does not precisely replicate the cognitive demands of mathematics either.

“Understanding computer code seems to be its own thing. It’s not the same as language, and it’s not the same as math and logic,” says Anna Ivanova, an MIT graduate student and the lead author of the study.

Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in eLife. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Tufts University were also involved in the study.

Language and cognition

A major focus of Fedorenko’s research is the relationship between language and other cognitive functions. In particular, she has been studying the question of whether other functions rely on the brain’s language network, which includes Broca’s area and other regions in the left hemisphere of the brain. In previous work, her lab has shown that music and math do not appear to activate this language network.

“Here, we were interested in exploring the relationship between language and computer programming, partially because computer programming is such a new invention that we know that there couldn’t be any hardwired mechanisms that make us good programmers,” Ivanova says.

There are two schools of thought regarding how the brain learns to code, she says. One holds that in order to be good at programming, you must be good at math. The other suggests that because of the parallels between coding and language, language skills might be more relevant. To shed light on this issue, the researchers set out to study whether brain activity patterns while reading computer code would overlap with language-related brain activity.

The two programming languages that the researchers focused on in this study are known for their readability — Python and ScratchJr, a visual programming language designed for children age 5 and older. The subjects in the study were all young adults proficient in the language they were being tested on. While the programmers lay in a functional magnetic resonance imaging (fMRI) scanner, the researchers showed them snippets of code and asked them to predict what action the code would produce.
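
For readers unfamiliar with the task, here is a hypothetical “predict the output” item of the kind described above. It is purely illustrative and is not one of the study’s actual stimuli.

```python
# Hypothetical example of a short, readable Python snippet a participant
# might be asked to mentally evaluate while lying in the scanner.
words = ["fox", "owl", "heron"]
shouted = []
for w in words:
    if len(w) == 3:          # keep only the three-letter words
        shouted.append(w.upper())
print(shouted)               # expected prediction: ['FOX', 'OWL']
```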

The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks.

“It does pretty much anything that’s cognitively challenging, that makes you think hard,” Ivanova says.

Previous studies have shown that math and logic problems seem to rely mainly on the multiple demand regions in the left hemisphere, while tasks that involve spatial navigation activate the right hemisphere more than the left. The MIT team found that reading computer code appears to activate both the left and right sides of the multiple demand network, and ScratchJr activated the right side slightly more than the left. This finding goes against the hypothesis that math and coding rely on the same brain mechanisms.

Effects of experience

The researchers say that while they didn’t identify any regions that appear to be exclusively devoted to programming, such specialized brain activity might develop in people who have much more coding experience.

“It’s possible that if you take people who are professional programmers, who have spent 30 or 40 years coding in a particular language, you may start seeing some specialization, or some crystallization of parts of the multiple demand system,” Fedorenko says. “In people who are familiar with coding and can efficiently do these tasks, but have had relatively limited experience, it just doesn’t seem like you see any specialization yet.”

In a companion paper appearing in the same issue of eLife, a team of researchers from Johns Hopkins University also reported that solving code problems activates the multiple demand network rather than the language regions.

The findings suggest there isn’t a definitive answer to whether coding should be taught as a math-based skill or a language-based skill. In part, that’s because learning to program may draw on both language and multiple demand systems, even if — once learned — programming doesn’t rely on the language regions, the researchers say.

“There have been claims from both camps — it has to be together with math, it has to be together with language,” Ivanova says. “But it looks like computer science educators will have to develop their own approaches for teaching code most effectively.”

The research was funded by the National Science Foundation, the Department of the Brain and Cognitive Sciences at MIT, and the McGovern Institute for Brain Research.

Provided by MIT

Research Finds Brains Work Harder While Processing Descriptions of Motion in Other Languages (Neuroscience)

We all run from a burning building the same way—fast!—but how we describe it depends on the language we speak. In some languages, we might flee, race, or bolt, while in others we might just exit or leave the building quickly.


Different languages describe motion differently, according to distinct lexical rules. And though we may not consciously notice those rules, we follow them—and Georgia State researchers have found they affect how our brains perceive and process descriptions of physical movement.

Our brain has to work a little harder when we’re reading about physical movement in a way that is not typical in our native language, according to a new study by Şeyda Özçalışkan, an associate professor in the Department of Psychology at Georgia State University, former faculty member Christopher M. Conway, and Samantha Emerson, a former Georgia State University graduate student. Their study, “Semantic P600—but not N400—effects index crosslinguistic variability in speakers’ expectancies for expression of motion,” was published recently in the journal Neuropsychologia.

“Physical movement always has the same key components, no matter where you live in the world, no matter what language you speak,” explains Özçalışkan. “But languages talk about it differently.”

Languages such as English, Polish, German and Dutch include the manner in which we move in the actual verb (we bolt, race, dawdle, sashay). But other languages, such as Spanish, Turkish, Japanese or Korean, add the manner at the end as a modifier (we enter rapidly, we ascend slowly).

Languages also differ in the way they describe the path of physical movement. Spanish includes path in the verb: we descend the mountain, or more dramatically, we descend the mountain arduously. But other languages, such as English and German, add the path after the motion: we crawl down the mountain.

The way we habitually express motion becomes internalized, says Özçalışkan, and likely affects how we perceive our world, particularly when speaking. And when we’re faced with another language’s unfamiliar description, our brain has to work harder momentarily. This can actually be measured with an electroencephalogram (EEG), a test used to evaluate the electrical activity in the brain. In their new study, the researchers tested motion descriptions in English and Spanish and found a surprising pattern.

“We found a very interesting pattern which in scientific language is called the P600 effect,” said Emerson, who now works at the Center for Childhood Deafness, Language, & Learning at Boys Town National Research Hospital in Omaha. “It is usually found in response to grammatical errors. We think that when you read a sentence structured in a way that is unfamiliar, you have to stop and go back and analyze it just as you would with a grammatical error.”

Then your brain has to “repair” those violations to your expectations, Emerson said. That means your brain has to exert a stronger electrical signal.
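
As a rough illustration of how an effect like the P600 is extracted from EEG recordings, the sketch below averages epochs time-locked to the critical word in each condition and compares the two averaged waveforms in a late time window around 600 ms. The array shapes, sampling rate, and values are placeholders for illustration, not data from the study.

```python
import numpy as np

# Placeholder EEG epochs for one electrode: (trials, time samples),
# time-locked to the critical word; a 1000 Hz sampling rate is assumed.
rng = np.random.default_rng(0)
typical_phrasing  = rng.normal(0.0, 1.0, size=(40, 900))
atypical_phrasing = rng.normal(0.0, 1.0, size=(40, 900))

# Averaging across trials yields the event-related potential (ERP);
# the P600 is a positive deflection roughly 500-800 ms after word onset.
erp_typical  = typical_phrasing.mean(axis=0)
erp_atypical = atypical_phrasing.mean(axis=0)

window = slice(500, 800)  # samples covering ~500-800 ms at 1000 Hz
p600_difference = erp_atypical[window].mean() - erp_typical[window].mean()
print(f"Mean amplitude difference in the P600 window: {p600_difference:.3f}")
```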

This is the first study that has examined actual neural activity in the brain in response to cross-linguistic differences related to motion. “And this study follows a long line of research on motion events at our lab at Georgia State University,” said Özçalışkan.

Previous studies in Özçalışkan’s laboratory have looked at how individuals use their hands when speaking, which also differs strongly across languages. In future studies, the researchers want to test a variety of the world’s languages, to see if the P600 pattern remains when describing physical movement. They also want to test Spanish and English bilingual speakers to see if there is still a P600 effect when one is fluent in both languages.

In the end, this kind of research can give us insights into how our brains interpret the world around us, influenced by language in ways we never would have expected or perhaps even noticed.

Reference: Samantha N. Emerson et al, Semantic P600—but not N400—effects index crosslinguistic variability in speakers’ expectancies for expression of motion, Neuropsychologia (2020). DOI: 10.1016/j.neuropsychologia.2020.107638 https://www.sciencedirect.com/science/article/pii/S0028393220303109?via%3Dihub

Provided by Georgia State University

How The Brain Distinguishes Fact From Possibility (Neuroscience)

Our brains respond to language expressing facts differently than they do to words conveying possibility, a team of neuroscientists has found. Its work offers new insights into the impact word choice has on how we make distinctions between what’s real vs. what’s merely possible.

A full-brain analysis revealed a significant effect for modal force, eliciting stronger activity for the factual condition over the modal conditions. ©Tulling et al., eNeuro 2020

“At a time of voluminous fake news and disinformation, it is more important than ever to separate the factual from the possible or merely speculative in how we communicate,” explains Liina Pylkkanen, a professor in NYU’s Department of Linguistics and Department of Psychology and the senior author of the paper, which appears in the journal eNeuro.

“Our study makes clear that information presented as fact evokes special responses in our brains, distinct from when we process the same content with clear markers of uncertainty, like ‘may’ or ‘might’,” adds Pylkkanen, also part of the NYU Abu Dhabi Institute.

“Language is a powerful device to effectively transmit information, and the way in which information is presented has direct consequences for how our brains process it,” adds Maxime Tulling, a doctoral candidate in NYU’s Department of Linguistics and the paper’s lead author. “Our brains seem to be particularly sensitive to information that is presented as fact, underlining the power of factual language.”

Researchers have long understood that the brain responds in a variety of ways to word choice. Less clear, however, are the distinctions it makes in processing language expressing fact compared to that expressing possibility. In the eNeuro study, the scientists’ primary goal was to uncover how the brain computes possibilities as conveyed by so-called “modal” words such as “may” or “might”—as in, “There is a monster under my bed” as opposed to, “There might be a monster under my bed.”

To explore this, the researchers used formal semantic theories in linguistics to design multiple experiments in which subjects heard a series of sentences and scenarios expressed as both fact and possibility—for example, “Knights carry large swords, so the squires do too” (factual) and “If knights carry large swords, the squires do too” (possible).

In order to measure the study subjects’ brain activity during these experiments, the researchers deployed magnetoencephalography (MEG), a technique that maps neural activity by recording magnetic fields generated by the electrical currents produced by our brain.

The results showed that factual language led to a rapid increase in neural activity, with the brain responding more powerfully and showing more engagement with fact-based phrases and scenarios compared to those communicating possibility.

“Facts rule when it comes to the brain,” observes Pylkkanen. “Brain regions involved in processing discourse rapidly differentiated facts from possibilities, responding much more robustly to factual statements than to non-factual ones. These findings suggest that the human brain has a powerful, perspective-adjusted neural representation of factual information and, interestingly, much weaker, more elusive cortical signals reflecting the computation of mere possibilities.”

“By investigating language containing clear indicators of possibility compared to factual utterances, we were able to find out which regions of the brain help to rapidly separate non-factual from factual language,” explains Tulling. “Our study thus illustrates how our choice of words has a direct impact on subconscious processing.”

Reference: Tulling et al., “Neural Correlates of Modal Displacement and Discourse-Updating Under (Un)Certainty”, eNeuro (2020). DOI: 10.1523/ENEURO.0290-20.2020

Provided by Society for Neuroscience