What do language, music and art have in common? These defining aspects of human intelligence all rest on the faculty of abstraction – the unique ability of the human mind to organize information beyond the immediate sensory reality. As we constantly face billions of bits of information streaming in from our senses, our brain must recast this maelstrom into simpler structures. How does the brain do so? Research in artificial intelligence suggests that even the best of our current algorithms struggle to handle the complexity of everyday, real-life problems.
By using a combination of mathematical modeling, machine learning and brain imaging technology, researchers have discovered what happens in the brain when people use mental abstractions. In essence, the brain system that normally tracks economic value becomes very active and ‘talks’ to the system that processes visual information. These value signals, much decried as the basis for marketing strategies, actually support a crucial aspect of our intelligence: the brain uses value to select information and create mental abstractions. The study, published in the journal eLife, could open the way to new advances in basic research, education and rehabilitation, and the treatment of psychiatric disorders, as well as to the development of novel algorithms in artificial intelligence.
The international team tested people’s ability to solve decision problems presented on a computer screen while they lay inside an MRI scanner. When participants responded correctly, they were given a small reward. The problems could be solved with one of two strategies: an inefficient one based on all the information presented on the screen, and a better one that required mental abstractions. By analyzing the brain data with machine learning, the researchers found that when people used mental abstractions, this coincided with increased activity in the brain area that signals how valuable things are. In a second experiment, the researchers used a novel neurofeedback technique to artificially change, directly in the brain, the value of some of the items used in the decision problems. After the manipulation, participants were more likely to use mental abstractions in those decision problems.
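The machine-learning analysis described above is a form of multivoxel pattern analysis (MVPA), in which a classifier is trained on patterns of brain activity to decode a mental state. The sketch below is purely illustrative, not the authors' pipeline: the "voxel patterns" and strategy labels are synthetic, whereas in the study they would come from preprocessed fMRI data and participants' behavior.

```python
# Illustrative MVPA-style decoding sketch with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 50
# Labels: 0 = feature-based (inefficient) strategy, 1 = abstract strategy.
labels = rng.integers(0, 2, n_trials)

# Synthetic voxel patterns: trials using the abstract strategy carry a
# weak, consistent signal on a subset of voxels, buried in noise.
signal = np.zeros((n_trials, n_voxels))
signal[labels == 1, :10] = 0.8
patterns = signal + rng.normal(0.0, 1.0, (n_trials, n_voxels))

# Cross-validated decoding accuracy; accuracy reliably above chance (0.5)
# would indicate the strategy is linearly decodable from the patterns.
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

Above-chance cross-validated accuracy is the standard criterion in such analyses, since it guards against the classifier simply memorizing noise.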
Dr. Aurelio Cortese, chief researcher at the Advanced Telecommunications Research Institute International, Kyoto, who led the team, said:
“This study is quite unique in its kind in that a high level, complex function like abstraction was studied with basic visual stimuli and simple decision problems. Yet, this simplicity led us directly to the underlying mechanism, helping resolve a long-standing question in the neuroscience literature: why do we see value signals in the brain literally all the time? Mental abstractions may be the key—we constantly need to think in abstract terms, since our world would be too complex otherwise.”
Dr. Mitsuo Kawato, director of the Computational Neuroscience Laboratories at ATR, Kyoto, was a co-author on the study, and explained the state-of-the-art neurofeedback manipulation: “With machine learning and advanced neuroimaging, we can now detect when, and if, a mental representation appears in the brain below the awareness threshold, in real time. When we do so—we give our participants a small reward. With time, that mental representation becomes paired with reward, or in terms of this experiment, with value. This way, we were able to ‘trick’ the brain into using these newly valuable mental representations to construct abstract thoughts.”
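The pairing logic Dr. Kawato describes can be sketched in a few lines. This is a toy illustration of the idea only, with hypothetical names and numbers: in real decoded neurofeedback, a decoder trained in earlier sessions estimates from live fMRI how strongly the target representation is present, and the participant's reward scales with that likelihood.

```python
# Toy sketch of the decoded-neurofeedback reward rule (all values hypothetical).
import numpy as np

def feedback_reward(decoder_prob: float, max_reward: float = 1.0) -> float:
    """Reward proportional to the decoded likelihood (clipped to [0, 1])
    that the target mental representation is currently active."""
    return max_reward * float(np.clip(decoder_prob, 0.0, 1.0))

# Simulated session: decoded likelihoods drift upward as the participant
# implicitly learns to evoke the rewarded representation.
rng = np.random.default_rng(1)
probs = np.clip(np.linspace(0.3, 0.8, 10) + rng.normal(0.0, 0.05, 10), 0.0, 1.0)
rewards = [feedback_reward(p) for p in probs]
print(f"total reward: {sum(rewards):.2f}")
```

The key design point is that the participant never needs to be told what the target representation is: reward alone shapes the brain toward producing it.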
Dr. Benedetto De Martino, professor at University College London, Institute of Cognitive Neuroscience, was the senior author on the study and a leading expert in neuroeconomics: “The proposal that value—traditionally associated with its hedonic dimension (for example, the value of a chocolate bar)—could be crucial for some aspects of our general intelligence is radical. Value may well be an abstraction in its own right. This research is part of our broader effort to understand the algorithmic nature of the human mind—and eventually translate this knowledge into new architectures in artificial intelligence and new treatments for psychiatric illnesses.”
Featured image: Learning task and behavioral results. (A) Task: participants learned the fruit preferences of pacman-like characters, which changed on each block. (B) Associations could form in three ways: color – stripe orientation, color – mouth direction, and stripe orientation – mouth direction; the left-out feature was irrelevant. The panel also shows examples of the two types of fruit associations: the four combinations arising from two features with two levels were divided into symmetric (2×2) and asymmetric (3×1) cases. f1–f3: features 1 to 3; fruit:rule indicates that the fruit served as the association rule. Both block types were included to prevent participants from learning rules by simple deduction: if all blocks had symmetric association rules and participants knew this, they could learn a single feature–fruit association (e.g. green–vertical) and deduce all other combinations from it. Both the relevant features and the association types varied on a block-by-block basis. (C) Trial-by-trial ratio correct, a measure of within-block learning, improved over trials. Dots represent the mean across participants, error bars indicate the SEM, and the shaded area represents the 95% CI (N = 33). Participant-level ratio correct was computed for each trial across all completed blocks. Source data are available in file Figure 1—source data 1. (D) Learning speed increased over time across participants. Learning speed was computed as the inverse of the max-normalized number of trials taken to complete a block. Thin gray lines represent least-squares fits of individual participants, while the black line represents the group-average fit. The correlation was computed with group-averaged data points (N = 11). Average data points are plotted as colored circles; the error bars are the SEM. (E) Confidence judgements were positively correlated with learning speed across participants.
Each dot represents data from one participant, and the thick line indicates the regression fit (N = 31 [2 missing data]). The experiment was conducted once (n = 33 biologically independent samples), **p<0.01. Credit: DOI: 10.7554/eLife.68943
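The learning-speed measure defined in panel (D) of the caption — the inverse of the max-normalized number of trials taken to complete a block — can be computed as follows. The trial counts here are hypothetical, chosen only to show the shape of the metric.

```python
# Learning speed as the inverse of the max-normalized trials-to-completion.
import numpy as np

# Hypothetical per-block trial counts for one participant (fewer trials
# needed in later blocks indicates faster learning).
trials_per_block = np.array([40, 32, 28, 24, 20, 18])

# Max-normalize, then invert: the slowest block gets speed 1.0, and
# faster blocks get proportionally higher values.
learning_speed = 1.0 / (trials_per_block / trials_per_block.max())
print(learning_speed.round(2))
```

Normalizing by the maximum puts participants with very different overall trial counts on a comparable scale before correlating speed with time.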
Reference: Aurelio Cortese et al, Value signals guide abstraction during learning, eLife (2021). DOI: 10.7554/eLife.68943
Provided by ATR Brain Information Communication Research Laboratory Group