Conspiracy theories are more topical than ever. But what influence do they have on our behavior? Scientists led by behavioral economist Loukas Balafoutas investigated this question in a recently published study. The result: We don’t need to believe in conspiracy theories for them to have an impact on us. Merely being confronted with them suffices.
Previous studies have shown that belief in conspiracy theories influences the behavior of adherents – for example, leading to lower voter turnout or a lower willingness to get vaccinated. For years now, conspiracy theories have been experiencing a real boom – it is almost impossible to ignore them. This prompted a research team led by Loukas Balafoutas to conduct a laboratory experiment investigating whether conspiracy theories also affect us when we do not believe in them and are only briefly confronted with them. “Our study shows that subjects who were exposed to a conspiracy theory for just three minutes acted differently in a subsequent behavioral experiment than subjects from the control group,” reports Loukas Balafoutas, Professor of Experimental Economics at the Department of Finance at the University of Innsbruck. The researchers recently published these results in the journal “Economic and Political Studies”.
Conspiracy theories change behavior
In the so-called EconLab of the University of Innsbruck, the researchers conducted their experiment before the COVID-19 pandemic. Half of the 144 participants in the study were shown a 3-minute video depicting the 1969 moon landing as faked. The control group, on the other hand, watched an equally long video about the space shuttle program. Subsequently, the participants took part in the so-called “money request game”. The players were divided into pairs and asked to make a simultaneous integer bid between 5 and 14 euros. Whoever made the smaller bid received the amount of that bid plus 10 euros; whoever made the larger bid received only the amount of the bid. In the event of a tie, both participants received exactly their bid. In this game, the best response to a bid larger than 5 euros from the other participant is to bid exactly one euro less. If the other participant bids 5 euros, the best response is to bid 14 euros. “In this experiment, we found that subjects who had previously watched the conspiracy theory video bid smaller amounts. This shows that these participants act more strategically. On the one hand, this can possibly lead to a higher profit in the game, but at the same time this approach also carries the risk of incurring a loss,” explains Balafoutas. “So our aim here is not to evaluate this behavior as better or worse, but simply to show that people who were exposed to a conspiracy theory shortly beforehand display different behavior than the control group in a subsequent situation that is completely different in terms of content. From this we conclude that the conspiracy theory has an influence on how someone perceives the world and other people,” Balafoutas continues.
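The payoff rule and best-response logic of the money request game can be sketched in a few lines of Python (the function names are illustrative, not from the study):

```python
def payoff(my_bid, other_bid):
    """Money request game: integer bids of 5-14 euros.

    The lower bidder earns their bid plus a 10-euro bonus; the higher
    bidder, or each player in a tie, earns exactly their own bid.
    """
    if my_bid < other_bid:
        return my_bid + 10
    return my_bid


def best_response(other_bid):
    """Bid that maximizes my payoff against a given opposing bid."""
    return max(range(5, 15), key=lambda b: payoff(b, other_bid))
```

For any opposing bid above 5 euros, `best_response` returns one euro less, since undercutting captures the 10-euro bonus; against a bid of 5, undercutting is impossible, so the best reply jumps to 14.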
In another experiment, the so-called “trust game,” the researchers tested the extent to which exposure to a conspiracy theory leads to an impairment of trust toward others. In this game, players were divided into pairs. In each pair, both players received 5 euros. One of the players (A) could decide to invest part or all of the amount. The invested amount was tripled and given to the other player (B), who could then transfer part of the money back to player A – but did not have to. Larger amounts invested by A in this game correspond to a higher level of trust. “It is quite a positive message that we did not find any negative influence of the conspiracy theory here. Trust in the other person was statistically the same in both groups. That’s important, because in our society we need a certain level of trust for it to function at all,” Balafoutas says.
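The trust game's payoff structure can likewise be written out explicitly. Here is a minimal sketch, assuming (as in the standard trust game) that player B returns money out of the tripled transfer; the function name is illustrative:

```python
def trust_game_payoffs(invested, returned):
    """Payoffs (A, B) in a trust game with 5-euro endowments.

    Player A invests 0-5 euros; the investment is tripled and handed
    to player B, who may return any part of the tripled amount to A.
    """
    assert 0 <= invested <= 5, "A can invest at most the 5-euro endowment"
    tripled = 3 * invested
    assert 0 <= returned <= tripled, "B returns out of the tripled transfer"
    payoff_a = 5 - invested + returned
    payoff_b = 5 + tripled - returned
    return payoff_a, payoff_b
```

If A invests the full 5 euros and B returns nothing, A ends with 0 euros and B with 20; B returning 10 of the tripled 15 equalizes both players at 10 euros.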
That the scientists studied conspiracy theories in the lab is no coincidence. “As researchers, we don’t want to contribute to spreading conspiracy theories into society. Therefore, caution is always required in such studies. They must be carried out in an ethically justifiable manner and must also be approved in advance. It is particularly important to debrief the test subjects after such an experiment,” explains Loukas Balafoutas.
What was your favourite childhood toy? A car? A teddy bear? A doll? Many of us have fond memories of playing with dolls: dressing them up, combing their hair or doing some kind of role play with other toys.
But new research shows that playing with ultra-thin dolls could make young girls want a thinner body.
The small-scale study, led by Durham’s Psychology Department, shows that ultra-thin dolls may negatively affect body image in girls as young as five years old.
The researchers warn that the dolls, combined with exposure to ‘thin ideals’ in films, on TV and social media, could lead to body dissatisfaction in young girls, which has been shown to be a factor in the development of eating disorders.
In the research, thirty girls aged between 5 and 9 played with an ultra-thin doll, a realistic childlike doll or a car. Before and after each play session, the girls were asked about their own perceived body size and ideal body size via an interactive computer test using pictures.
Playing with the ultra-thin dolls reduced girls’ ideal body size in the immediate aftermath of play. And there was no improvement when they subsequently played with the childlike dolls or cars, showing that the effects cannot be immediately counteracted by other toys. The realistic childlike dolls were relatively neutral for girls’ body ideals.
The vast majority of the girls who took part in the study had access to ultra-thin dolls at home or with their friends and almost all of them also watched Disney and related films, which also tend to portray very thin female bodies.
In the study, the girls played with the dolls in pairs and before and after their play session, they were asked to change the body size in a picture of a girl to what they thought they looked like themselves, what they would like to look like and what they thought a beautiful woman looks like.
The experimental study contributes to a growing number of studies which show that doll play may affect the beauty ideals that young girls internalise.
Current widely available dolls tend to have ultra-thin bodies with a projected body mass index between 10 and 16, which is classed as underweight. The realistic childlike dolls used in the study resembled healthy 7- and 9-year-old children. The research was conducted independently from doll manufacturers.
Incorporating social and behavioral factors alongside biological mechanisms is critical for improving aging research, according to a trio of studies by leading social scientists.
A trio of recent studies highlight the need to incorporate behavioral and social science alongside the study of biological mechanisms in order to slow aging.
The three papers, published in concert in Ageing Research Reviews, emphasized how behavioral and social factors are intrinsic to aging. This means they are causal drivers of biological aging. In fact, the influence of behavioral and social factors on how fast people age is large and meaningful. However, geroscience – the study of how to slow biological aging to extend healthspan and longevity – has traditionally not incorporated behavioral or social science research. These papers are by three pioneers in aging research and members of the National Academy of Medicine who study different aspects of the intersection of biology and social factors in shaping healthy aging through the lifespan.
Improving translation of aging research from mice to humans
Exciting biological discoveries about the rate of aging in non-human species sometimes do not carry over, or are lost, when applied to humans. Including behavioral and social research can support translation of geroscience findings from animal models to benefit humans, said Terrie Moffitt, the Nannerl O. Keohane University Professor of Psychology and Neuroscience at Duke University.
“The move from slowing fundamental processes of aging in laboratory animals to slowing aging in humans will not be as simple as prescribing a pill and watching it work,” Moffitt said. “Compared to aging in laboratory animals, human aging has many behavioral/social in addition to cellular origins and influences. These influences include potential intervention targets that are uniquely human, and therefore are not easily investigated in animal research.”
Several of these human factors have big impacts on health and mortality: stress and early life adversity, psychiatric history, personality traits, intelligence, loneliness and social connection, and purpose in life are connected to a variety of late-life health outcomes, she explained. These important factors need to be taken into account to get a meaningful prediction of human biological aging.
“Geroscience can be augmented through collaboration with behavioral and social science to accomplish translation from animal models to humans, and improve the design of clinical trials of anti-aging therapies,” Moffitt said. “It’s vital that geroscience advances be delivered to everyone, not just the well-to-do, because individuals who experience low education, low incomes, adverse early-life experiences, and prejudice are the people who age fastest and die youngest.”
Social factors associated with poor aging outcomes
“Social hallmarks of aging” can be strongly predictive of age-related health outcomes – in many cases, even more so than biological factors, said USC University Professor and AARP Chair in Gerontology Eileen Crimmins. While the aging field commonly discusses the biological hallmarks of aging, it does not tend to include the social and behavioral factors that lead to premature aging. Crimmins has called the five factors below “the social hallmarks of aging” and proposes that they should not be ignored in any sample of humans, and that the concepts should be incorporated where possible into non-human studies.
Crimmins examined data that was collected in 2016 from the Health and Retirement Study, a large, nationally representative study of Americans over the age of 56 that incorporates both surveys regarding social factors and biological measurements, including a blood sample for genetic analysis. For the study, she focused on five social hallmarks associated with poor health outcomes:
low lifetime socioeconomic status, including lower levels of education
adversity in childhood and adulthood, including trauma and other hardships
being a member of a minority group
adverse health behaviors, including smoking, obesity and problem drinking
adverse psychological states, such as depression, negative psychological outlook and chronic stress
The presence of these five factors was strongly associated with older adults having difficulty with activities of daily living, experiencing problems with cognition, and multimorbidity (having five or more diseases). Even when controlling for biological measurements – including blood pressure, genetic risk factors, mitochondrial DNA copy number and more – the social differences, as well as demographic factors such as age and gender, explained most of the differences in aging outcomes between study subjects, she said. However, biological and social factors aren’t completely independent from one another, Crimmins added, which is why she advocates for further incorporation of social and behavioral factors in aging biology research.
“Variability in human aging is strongly related to the social determinants of aging; and it remains so when extensive biology is introduced as mediating factors. This means that the social variability in the aging process is only partly explained by the biological measures researchers currently use,” she said. “Our hypothesis is that if we could fully capture the basic biological mechanisms of aging, they would even more strongly explain the social variability in the process of aging, as social factors need to ‘get under the skin’ through biology.”
Understanding stress and stress resilience
Elissa Epel, professor and vice chair in the Department of Psychiatry and Behavioral Sciences at UC San Francisco, detailed how research on stress and resilience needs to incorporate psychosocial factors in order to understand how different kinds of stress affect aging. Not all types of stress are equal and in fact some are salutary.
The social hallmarks of aging can shape the rate of aging in part through toxic stress responses, she said. While acute responses to minor or moderate stressors, including infection or injury, are critical to survival, chronic exposure to high amounts of stress – including long-term psychological stressors such as abuse – can prove toxic and result in poor health outcomes.
“Brief, intermittent, low-dose stressors can lead to positive biological responses, improving resistance to damage, which is called hormesis,” Epel explained. For example, physiological hormetic stressors include short term exposure to cold, heat, exercise, or hypoxia. Hormetic stress turns on mechanisms of cell repair and rejuvenation. “In contrast, a high dose of a chronic exposure can override these mechanisms, resulting in damage or death,” she added. Thus, toxic stress can accelerate biological aging processes, whereas hormetic stress can slow aging.
However, the types, timing, and frequency of hormetic stress need to be better delineated in order to be useful to human aging research and interventions, Epel said.
“Stress resilience, an umbrella term including hormetic stress, can be measured across cellular, physiological, and psychosocial functioning,” she said. “Developing a deeper understanding of stress resilience will lead to more targeted innovative interventions.” Stress resilience can also include social interventions that protect from the malleable social hallmarks of aging, including safe neighborhoods to reduce trauma and violence, and social support programs to combat loneliness and depression.
Geroscience is now more important than ever, both for our aging global population and for the health challenges we face going forward, and stress resilience is an especially important topic at the moment, Epel added. “In our new era, we have dramatically increasing temperature extremes, wildfires and small particle pollution, and new zoonotic viruses to contend with intermittently,” she said. “Reducing social disparities, improving stress resilience and bolstering immune function have become critical public health goals.”
In sum, the three papers together point to a promising decade ahead for aging research.
Humans, as complex social mammals, age together in response to social conditions and behavioral factors that are partly malleable. Epel explains: “As we discover and test biological processes of aging that we can manipulate, we can do this in tandem with capitalizing on the natural levers of healthy aging that are powerful, interactive, and cannot be ignored. In this way, the fountain of youth becomes more attainable.”
“Behavioral and Social Research to Accelerate the Geroscience Translation Agenda” by Terrie E. Moffitt was supported by the National Institute on Aging (AG032282, R01 AG049789) and the U.K. Medical Research Council (P005918). “Social hallmarks of aging: Suggestions for geroscience research” by Eileen Crimmins was funded by grants from the National Institute on Aging (U01 AG009740, P30 AG017265, and R01 AG060110). “The geroscience agenda: Toxic stress, hormetic stress, and the rate of aging” by Elissa Epel was funded by National Institute on Aging grant R24 AG048024.
The paper explores how to reverse this potentially violent form of addiction by restoring an individual’s psychological needs, and why challenging their ideology is counterproductive.
Learning more about what motivates people to join violent ideological groups and engage in acts of cruelty against others is of great social and societal importance. New research from Assistant Professor of Psychology at NYUAD Jocelyn Bélanger explores the idea of ideological obsession as a form of addictive behavior that is central to understanding why people ultimately engage in ideological violence, and how best to help them break this addiction.
Bélanger identifies four sociocognitive processes that sustain ideological obsession. The first is moral disengagement: ideological obsession deactivates moral self-regulation processes, which allows unethical behaviors to happen without self-recrimination. The second is hatred: ideologically obsessed individuals are ego-defensive and easily threatened by information that criticizes their beliefs, which leads to greater hatred and potentially violent retaliation. Third, ideological obsession changes people’s social interactions, causing them to gravitate toward like-minded people – networks – who support their violent thinking. And finally, these individuals are prone to psychological reactance, which makes them immune to communications that attempt to dissuade them from violence.
“As we seek ways to prevent and combat violent radicalization, we must understand this behavior as an addiction to an ideology, rooted in a feeling of absence of personal significance,” said Bélanger. “Common approaches, like trying to provide information that counters someone’s hateful ideology, are not only futile, but often counterproductive. To steer people away from ideologically-motivated violence, we must focus on their psychological needs, such as meaning and belonging, and help them attain richer, more satisfying, and better-balanced lives.”
Reference: Jocelyn J. Bélanger, “The sociocognitive processes of ideological obsession: review and policy implications,” Philosophical Transactions of the Royal Society B, 22 February 2021. https://doi.org/10.1098/rstb.2020.0144
Provided by New York University
About NYU Abu Dhabi
NYU Abu Dhabi is the first comprehensive liberal arts and science campus in the Middle East to be operated abroad by a major American research university. NYU Abu Dhabi has integrated a highly-selective liberal arts, engineering and science curriculum with a world center for advanced research and scholarship enabling its students to succeed in an increasingly interdependent world and advance cooperation and progress on humanity’s shared challenges. NYU Abu Dhabi’s high-achieving students have come from more than 115 nations and speak over 115 languages. Together, NYU’s campuses in New York, Abu Dhabi, and Shanghai form the backbone of a unique global university, giving faculty and students opportunities to experience varied learning environments and immersion in other cultures at one or more of the numerous study-abroad sites NYU maintains on six continents.
Boys who regularly play video games at age 11 are less likely to develop depressive symptoms three years later, finds a new study led by a UCL researcher.
The study, published in Psychological Medicine, also found that girls who spend more time on social media appear to develop more depressive symptoms.
Taken together, the findings demonstrate how different types of screen time can positively or negatively influence young people’s mental health, and may also impact boys and girls differently.
Lead author, PhD student Aaron Kandola (UCL Psychiatry) said: “Screens allow us to engage in a wide range of activities. Guidelines and recommendations about screen time should be based on our understanding of how these different activities might influence mental health and whether that influence is meaningful.
“While we cannot confirm whether playing video games actually improves mental health, it didn’t appear harmful in our study and may have some benefits. Particularly during the pandemic, video games have been an important social platform for young people.
“We need to reduce how much time children – and adults – spend sitting down, for their physical and mental health, but that doesn’t mean that screen use is inherently harmful.”
Kandola previously led studies finding that sedentary behaviour (sitting still) appeared to increase the risk of depression and anxiety in adolescents. To gain more insight into what drives that relationship, he and colleagues chose to investigate screen time as it is responsible for much of sedentary behaviour in adolescents. Other studies have found mixed results, and many did not differentiate between different types of screen time, compare between genders, or follow such a large group of young people over multiple years.
The research team from UCL, Karolinska Institutet (Sweden) and the Baker Heart and Diabetes Institute (Australia) reviewed data from 11,341 adolescents who are part of the Millennium Cohort Study, a nationally representative sample of young people who have been involved in research since they were born in the UK in 2000–2002.
The study participants had all answered questions about their time spent on social media, playing video games, or using the internet, at age 11, and also answered questions about depressive symptoms, such as low mood, loss of pleasure and poor concentration, at age 14. The clinical questionnaire measures depressive symptoms and their severity on a spectrum, rather than providing a clinical diagnosis.
In the analysis, the research team accounted for other factors that might have explained the results, such as socioeconomic status, physical activity levels, reports of bullying, and prior emotional symptoms.
The researchers found that boys who played video games most days had 24% fewer depressive symptoms, three years later, than boys who played video games less than once a month, although this effect was only significant among boys with low physical activity levels, and was not found among girls. The researchers say this might suggest that less active boys could derive more enjoyment and social interaction from video games.
While their study cannot confirm if the relationship is causal, the researchers say there are some positive aspects of video games which could support mental health, such as problem-solving, and social, cooperative and engaging elements.
There may also be other explanations for the link between video games and depression, such as differences in social contact or parenting styles, which the researchers did not have data for. They also did not have data on hours of screen time per day, so they cannot confirm whether multiple hours of screen time each day could impact depression risks.
The researchers found that girls (but not boys) who used social media most days at age 11 had 13% more depressive symptoms three years later than those who used social media less than once a month, although they did not find an association for more moderate use of social media. Other studies have previously found similar trends, and researchers have suggested that frequent social media use could increase feelings of social isolation.
Screen use patterns between boys and girls may have influenced the findings, as boys in the study played video games more often than girls and used social media less frequently.
The researchers did not find clear associations between general internet use and depressive symptoms in either gender.
Senior author Dr Mats Hallgren (Karolinska Institutet) has conducted other studies in adults finding that mentally-active types of screen time, such as playing video games or working at a computer, might not affect depression risk in the way that more passive forms of screen time appear to do.
He said: “The relationship between screen time and mental health is complex, and we still need more research to help understand it. Any initiatives to reduce young people’s screen time should be targeted and nuanced. Our research points to possible benefits of screen time; however, we should still encourage young people to be physically active and to break up extended periods of sitting with light physical activity.”
Featured image: Video gaming. Credit: ulricaloeb on Flickr (CC BY 2.0)
Reference: Kandola, A., Owen, N., Dunstan, D., & Hallgren, M. (2021). Prospective relationships of adolescents’ screen-based sedentary behaviour with depressive symptoms: The Millennium Cohort Study. Psychological Medicine, 1-9. doi: 10.1017/S0033291721000258
According to a recent study, dogs can learn new words after hearing them only four times.
A new study found that talented dogs can learn new words after hearing them only four times.
While preliminary evidence seems to show that most dogs do not learn words (i.e., names of objects) unless very extensively trained, a few individuals have shown some exceptional abilities.
The Family Dog Project research team at the Department of Ethology, Eötvös Loránd University, Budapest is investigating these exceptionally talented dogs, which seem to learn words in the absence of any formal training, simply by playing with their owners in the typical way owners play with their dogs in a human family.
A new study, just published in Scientific Reports, has provided surprising results about how quickly gifted dogs can learn new words. Two gifted dogs participated in this experiment: Whisky, a Border Collie from Norway already famous for her spontaneous categorization skills, and Vicky Nina, a Yorkshire Terrier from Brazil. Their ability to learn a new word after hearing it only four times was tested.
While it is natural to think that dogs, like human children, would learn words mostly in a social context, previous studies tested the ability of talented dogs to learn object names during an exclusion-based task. In such a task, the dog is confronted with a setup in which seven familiar, already named dog toys are present together with a novel one, and its ability to choose the novel object upon hearing a novel name is tested.
“We wanted to know under which conditions the gifted dogs may learn novel words. To test this, we exposed Whisky and Vicky Nina to the new words in two different conditions” explains Claudia Fugazza, first author of the study, “during an exclusion-based task and in a social playful context with their owners. Importantly, in both conditions the dogs heard the name of the new toy only 4 times”.
In the exclusion-based task, the dogs showed that they were able to select the new toy when their owner spoke a new name, confirming that dogs can choose by exclusion – i.e., excluding all the other toys because they already have a name, and selecting the only one that does not. However, this was not the way they learned the name of the toy. In fact, when the dogs were tested on their ability to recognize the toy by its name, pitted against another equally novel name, they failed.
The other, social condition – in which the owners pronounced the name of the toy while playing with the dog – proved to be the successful way to learn the name of the toy, even after hearing it only four times. Whisky and Vicky Nina were able to select the toys based on their names when they had learned the names this way.
“Such rapid learning seems to be similar to the way human children acquire their vocabulary around 2-3 years of age”, comments Adam Miklósi, head of the Department of Ethology and co-author of the study.
To test whether most dogs would learn words this way, 20 other dogs were tested in the same condition, but none of them showed any evidence of learning the toy names, confirming that the capacity to learn words rapidly, in the absence of formal training, is very rare and is only present in a few gifted dogs.
After so few exposures, however, Whisky’s and Vicky Nina’s memory of the learned words decayed quite fast. While in the first test, conducted a couple of minutes after hearing the toy names, the dogs were successful, they did not succeed in most of the tests conducted after 10 minutes and 1 hour.
To find out more about the number of words that gifted dogs can learn in a short timeframe, the researchers of Eötvös Loránd University have also recently launched the Genius Dog Challenge, a project that went viral on social media.
Vicky Nina, unfortunately, passed away in the meantime and could not take part in the Genius Dog Challenge. Whisky is participating in it, together with five other talented dogs that the scientists have found around the world over the past two years of searching.
Like a parent being pestered with endless questions from a young child, most people will now and again find themselves following an infinite chain of cause and effect when considering what led to some particular event. And while many factors can contribute to an event, we often single out only a few as its causes. So how do we decide?
That’s the topic of a recent paper by Tadeg Quillien, a doctoral student in the Department of Psychological and Brain Sciences. The study, published in the journal Cognition, outlines how a factor’s role in an event influences whether or not we consider it to be a cause of that event.
In his paper, Quillien constructs a mathematical model of causal judgment that reproduces people’s intuitions better than any previous model. And in addition to providing theoretical insights, understanding how we reason about causality has major implications for how we approach problems overall.
Intuitively speaking, the event that has the strongest role in determining an outcome is generally considered its cause. In fact, philosophers and psychologists have observed humans ranking the causes of an event in different studies. For instance, if a match is found at the scene of a forest fire, people usually say the match caused the blaze, even though the oxygen in the air was also necessary for the fire to start.
“But what do we mean by ‘the strongest role’?” Quillien asked. “This is still a very hazy notion, and making it more precise has, for decades, been a source of headaches for philosophers and psychologists trying to understand causal judgment.”
Quillien approached this question by considering what evolutionary purpose our causal reasoning serves. “At least one of the functions of causal judgment is to highlight the factors that are most useful in predicting an outcome,” Quillien proposed, “as well as the factors that you can manipulate to affect the outcome.”
The process reminded him of a scientist seeking to understand how different phenomena are related. Scientists can run controlled experiments with many different cases to quantify correlations and determine an effect size, which is the association between one variable and another.
But if we accept that this is what the mind is trying to do, a problem arises. Scientists rely on many observations before arriving at a judgment. They can’t compute an effect size from a single occurrence. And yet, people generally have no trouble making one-off causal judgments.
Quillien believes that this paradox can be resolved with the following hypothesis. When people make a causal judgment, they are unconsciously imagining the different ways that an event could have unfolded. “These counterfactuals give you the data that you need in order to compute this measure of effect size,” he said.
Guided by these ideas, Quillien designed a simple mathematical model of how people make causal judgments. To test his model, he analyzed data from an experiment conducted by Harvard psychologist Adam Morris and his colleagues. The experiment used a lottery game to explore the effect of probability and logical structure on people’s causal intuitions.
“The probability of events affects our sense of causation in a strange way,” Quillien explained. Say a professor, Carl, wants funding for a project. His request is reviewed by his department chairs, Alice and Bill, both of whom have to approve it. Alice approves nearly every application, but Bill is notorious for rejecting most of them. The question is, if Carl receives his funding, who’s most responsible?
Most people would say Bill caused Carl’s request to be approved, since getting his endorsement has more bearing, in general, on receiving funding.
However, change just one detail, and people’s intuitions flip. If Carl only needs the approval of one or the other of his colleagues, and still gets both, then people attribute Carl’s funding to Alice. In this case, her more reliable support was the strongest factor in whether Carl’s project was funded.
In their experiment, Morris and his colleagues were able to precisely quantify this effect that an event’s probability had on people’s causal judgment. Their conclusion was surprising, and no psychological theory at the time could explain their results, Quillien said.
When he re-analyzed their data, Quillien found that his mathematical model closely matched how Morris’s participants had assigned causality to the various events. In fact, it matched the data better than any other model to date.
The results highlight how probability and logical structure together inform our causal intuition. When both votes are necessary for Carl to get funding, it will happen only if the most stringent committee member is on board. As a result, people attribute a positive outcome to the less likely vote. By contrast, in situations where a single vote is enough, the approval of the more permissive faculty member is what most often determines the outcome. “We are attuned to causes that tend to co-occur with the effects,” Quillien said.
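The flip described above can be checked with a quick counterfactual simulation. This is only a minimal sketch of the general idea, not Quillien's actual model: the 0.9/0.1 approval rates, the `simulate` helper and the choice of Pearson correlation as a stand-in for "effect size" are all illustrative assumptions.

```python
import random

def simulate(p_alice, p_bill, rule, n=100_000, seed=0):
    """Correlation between each approver's vote and the funding outcome,
    estimated over n imagined (counterfactual) committee meetings."""
    rng = random.Random(seed)
    alice, bill, funded = [], [], []
    for _ in range(n):
        a = rng.random() < p_alice   # Alice approves nearly everything
        b = rng.random() < p_bill    # Bill rejects most applications
        alice.append(a)
        bill.append(b)
        funded.append(rule(a, b))

    def corr(x, y):
        m = len(x)
        mx, my = sum(x) / m, sum(y) / m
        cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / m
        vx = sum((xi - mx) ** 2 for xi in x) / m
        vy = sum((yi - my) ** 2 for yi in y) / m
        return 0.0 if vx == 0 or vy == 0 else cov / (vx * vy) ** 0.5

    return corr(alice, funded), corr(bill, funded)

# Conjunctive rule (both votes needed): Bill's rare approval tracks the outcome.
ca, cb = simulate(0.9, 0.1, lambda a, b: a and b)

# Disjunctive rule (either vote suffices): Alice's reliable approval tracks it.
da, db = simulate(0.9, 0.1, lambda a, b: a or b)
```

Under the conjunctive rule the stringent voter's approval correlates far more strongly with funding, and under the disjunctive rule the permissive voter's does, matching the intuitions the article describes.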
The way in which we reason about causality has practical implications. Consider the example of the forest fire again. Fires need three things to burn: oxygen, fuel and an ignition source. But our minds don’t give these factors equal weight.
“While we may not have an exact model of how forest fires work, we still have this sense that oxygen is there all the time, and the forests are not always on fire,” Quillien said. “So the correlation between oxygen and fire is relatively low.” The same reasoning applies to the fuel, namely the wood in the trees. But introduce a match to the equation, and the forest is much more likely to catch fire.
The method of causal judgment that Quillien outlines in his work is good at guiding us toward the match: a factor with high predictive power that we might even be able to control. However, our intuition can sometimes lead us astray when we try to gain a more complete understanding of the world.
“If you want a deep understanding of how fire works, you need to factor in the role of oxygen,” Quillien said. “But if your intuitive sense of causation is screaming at you that oxygen does not matter, then that might lead you to ignore some of the important factors in the world.”
Causal reasoning is a ubiquitous feature of cognition, and Quillien plans to further investigate how our sense of causation influences other aspects of our psychology and worldview. “We explain almost everything in terms of cause and effect,” he said. “As a consequence, many of the concepts that we use to make sense of the world have causation as a building block.”
“If we can understand the concept of causation, then we can potentially understand the way a lot of other concepts work as well.”
While the world is eagerly waiting for COVID-19 vaccines to bring an end to the pandemic, wearing a mask to help prevent viral transmission has become more or less mandatory globally. Though many people embrace mask wearing and adhere to public health advice, some rebel and argue that wearing a mask has been imposed upon them against their will.
With mask wearing and social distancing, it’s down to the individual to decide whether or not to comply, yet what influences compliance isn’t straightforward. Demographic factors such as income level, political affiliation and gender have all been associated with whether people choose to wear a mask and socially distance.
However, psychology can go some way to explaining why behavioural differences occur. Past research has shown that psychological factors such as an individual’s perception of risk and tendency towards risky behaviour influence adherence to health behaviours. This is now being seen in the current pandemic.
One preprint study (yet to be peer reviewed) has shown that a greater propensity for risky decision-making goes hand in hand with being less likely to appropriately wear a mask or maintain social distancing. In another piece of research, perceptions of the risk of COVID-19 are cited as a driver of whether people decide to socially distance.
And there may also be a further psychological explanation: the phenomenon of “psychological reactance”. This is where people vehemently believe they have freedom to behave how they wish, and experience negative emotions when this freedom is threatened, and so become motivated to reinstate it.
This means that when told to wear a mask and socially distance, some people may perceive their behavioural freedom to be under threat. Anger and other negative emotions then follow. To reduce these uncomfortable feelings, these individuals may then attempt to restore their freedom by not complying with the advice.
Just as psychology can help explain why people may reject masks, it can also offer guidance on how to get people to accept them. A variety of techniques from social psychology can be used to persuade people to comply with health advice such as mask wearing, social distancing and self-isolating.
One key persuasion method is portraying consensus. When you show people that an attitude is shared (or not) by others, they are more likely to adopt it. Seeing someone wearing a mask makes it more likely that others will do the same. Persuasion strategies could therefore focus on making sure that people perceive mask wearing as widespread – perhaps by depicting it frequently in the media or by making it mandatory in certain places.
We also know from previous studies that people are more likely to comply with public health guidelines if they are clear, precise, simple and consistent – and if they trust the source from which they come.
But the effectiveness of these sorts of “one-size-fits-all” approaches to persuasion and behavioural change is likely to be limited. Initial findings in the area of personalised persuasion suggest it might be more effective to try bespoke approaches for people, based on combinations of their key characteristics (their “psychographic profiles”).
For example, in a recent piece of non-COVID research we identified three main personality profiles. Those who are more shy, socially inhibited and anxious tend to report being more likely to be persuaded by those in authority, whereas those who are more self-oriented and manipulative tend to feel the opposite; they report being less likely to be influenced by authority figures.
Moreover, those in the third group – who are agreeable, extroverted and conscientious – report being more likely to be persuaded to do something if it is consistent with what they have done before, and less likely if it requires them to change their position. This means if they have decided in the past that wearing masks is a bad thing, they’re more likely to resist any subsequent efforts to make them wear one.
A recent article concluded that shouting at people to wear masks won’t help, and this research into personalised persuasion backs this up. Only those in the shy and anxious group would be likely to respond well to such a direct and heavy-handed tactic. A far better strategy would be to try an empathetic approach that seeks to understand the varying motivations of different groups of people – including whether there is psychological reactance at play – and then tailor messages to individuals accordingly.
This article is republished from The Conversation under a Creative Commons license.
Time appears to be inherently directional: the past lies behind us, fixed and immutable, accessible through memory or written documentation; the future, on the other hand, lies ahead and is not necessarily fixed, and, although we can perhaps predict it to some extent, we have no firm evidence or proof of it. Most of the events we experience are irreversible: for example, it is easy for us to break an egg, and hard, if not impossible, to unbreak an already broken egg. It appears inconceivable to us that this progression could go in any other direction. This one-way direction or asymmetry of time is often referred to as the arrow of time, and it is what gives us an impression of time passing, of our progressing through different moments. The arrow of time, then, is the uniform and unique direction associated with the apparent inevitable “flow of time” into the future.
The idea of an arrow of time was first explored and developed to any degree by the British astronomer and physicist Sir Arthur Eddington back in 1927, and the origin of the phrase is usually attributed to him. What interested Eddington was that exactly the same arrow of time would apply to an alien race on the other side of the universe as applies to us. It is therefore nothing to do with our biology or psychology, but with the way the universe is. The arrow of time is not the same thing as time itself, but a feature of the universe and its contents and the way it has evolved.
Is the Arrow of Time an Illusion?
Addressing this question requires some knowledge of relativity, and of relativistic time in particular. If you are a beginner, let me explain: according to the theory of relativity, the reality of the universe can be described as a four-dimensional space-time, so that time does not actually “flow”, it just “is”. The perception of an arrow of time that we have in our everyday life therefore appears to be nothing more than an illusion of consciousness in this model of the universe, an emergent quality that we happen to experience due to our particular kind of existence at this particular point in the evolution of the universe.
Perhaps even more interesting and puzzling is the fact that, although events and processes at the macroscopic level – the behaviour of bulk materials that we experience in everyday life – are quite clearly time-asymmetric (i.e. natural processes DO have a natural temporal order, and there is an obvious forward direction of time), physical processes and laws at the microscopic level, whether classical, relativistic or quantum, are either entirely or mostly time-symmetric. If a physical process is physically possible, then generally speaking so is the same process run backwards, so that, if you were to hypothetically watch a movie of a physical process, you would not be able to tell if it is being played forwards or backwards, as both would be equally plausible.
In theory, therefore, most of the laws of physics do not necessarily specify an arrow of time. There is, however, an important exception: the Second Law of Thermodynamics.
Thermodynamic Arrow of Time
Most of the observed temporal asymmetry at the macroscopic level – the reason we see time as having a forward direction – ultimately comes down to thermodynamics, the science of heat and its relation with mechanical energy or work, and more specifically to the Second Law of Thermodynamics. This law states that, as one goes forward in time, the net entropy (degree of disorder) of any isolated or closed system will always increase (or at least stay the same).
The concept of entropy and the decay of ordered systems was explored and clarified by the German physicist Ludwig Boltzmann in the 1870s, building on earlier ideas of Rudolf Clausius, but it remains a difficult and often misunderstood idea. Entropy can be thought of, in most cases, as meaning that things (matter, energy, etc) have a tendency to disperse. Thus, a hot object always dissipates heat to the atmosphere and cools down, and not vice versa; coffee and milk mix together, but do not then separate; a house left unattended will eventually crumble away, but a pile of bricks never spontaneously forms itself into a house; etc. However, as discussed below, it is not quite as simple as that, and a better way of thinking of it may be as a tendency towards randomness.
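Boltzmann's statistical reading of entropy can be stated compactly. In standard notation, $k_B$ is Boltzmann's constant and $W$ is the number of microscopic arrangements (microstates) compatible with a system's observed macroscopic state:

```latex
S = k_B \ln W, \qquad \Delta S_{\mathrm{isolated}} \ge 0
```

The more microscopic arrangements that look the same macroscopically, the larger $W$ and hence the entropy; the Second Law then says that for an isolated system this quantity never decreases. Dispersed, "random" configurations dominate simply because there are vastly more of them.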
It should be noted that, in thermodynamic systems that are NOT closed, it is quite possible that entropy can decrease with time (e.g. the formation of certain crystals; many living systems, which may reduce local entropy at the expense of the surrounding environment, resulting in a net overall increase in entropy; the formation of isolated pockets of gas and dust into stars and planets, even though the entropy of the universe as a whole continues to increase; etc). Any localized or temporary instances of order within the universe are therefore in the nature of epiphenomena within the overall picture of a universe progressing inexorably towards disorder.
It is also perhaps counter-intuitive, but nevertheless true, that overall entropy actually increases even as large-scale structure forms in the universe (e.g. galaxies, clusters, filaments, etc), and that dense and compact black holes have incredibly high entropy, and actually account for the overwhelming majority of the entropy in today’s universe. Likewise, the relatively smooth configuration of the very early universe (see the section on Time and the Big Bang) is actually an indication of very low overall entropy (i.e. high entropy does not necessarily imply smoothness: random “lumpiness”, like in our current universe, is actually a characteristic of high entropy).
Most of the processes that appear to us to be irreversible in time are those that start out, for whatever reason, in some very special, highly-ordered state. For example, a new deck of cards is in number order, but as soon as we shuffle it the cards become disordered; an egg is a much more ordered state than a broken or scrambled egg; etc. There is nothing in the laws of physics that prevents the act of shuffling a deck of cards from producing a perfectly ordered set of cards – there is always a chance of that, it is just a vanishingly small chance. To give another example, there are many more possible disordered arrangements of a jigsaw than the one ordered arrangement that makes a complete picture. So, the apparent asymmetry of time is really just an asymmetry of chance – things evolve from order to disorder not because the reverse is impossible, but because it is highly unlikely. The Second Law of Thermodynamics is therefore more a statistical principle than a fundamental law (this was Boltzmann’s great insight). But the upshot is that, provided the initial condition of a system is one of relatively high order, then the tendency will almost always be towards disorder.
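Just how vanishingly small that chance is can be made concrete. The arithmetic below is a simple illustration of the card-shuffling example: a 52-card deck has 52! distinct orderings, only one of which is the "factory" order.

```python
import math

# Number of distinct orderings of a standard 52-card deck.
orderings = math.factorial(52)

# Chance that a single fair shuffle lands on the one fully ordered arrangement.
p_ordered = 1 / orderings

# 52! is roughly 8.07e67, so p_ordered is on the order of 1e-68:
# not impossible, just overwhelmingly unlikely.
```

This is the statistical character of the Second Law in miniature: order-to-disorder wins not by prohibition but by sheer weight of numbers.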
Thermodynamics, then, appears to be one of the only physical processes that is NOT time-symmetric, and it is so fundamental and ubiquitous in our universe that it may be single-handedly responsible for our perception of time as having a direction. Several of the other arrows of time noted below (arguably) ultimately come back to the asymmetry of thermodynamics. Indeed, so clear is this law that the measurement of entropy has been put forward as a way of distinguishing the past from the future, and the thermodynamic arrow of time has even been put forward as the reason we can remember the past but not the future, entropy or disorder having been lower in the past than it will be in the future.
Cosmological Arrow of Time
It has been argued that the arrow of time points in the direction of the universe’s expansion, as the universe continues to grow bigger and bigger since its beginning in the Big Bang. It became apparent towards the beginning of the 20th Century, thanks to the work of Edwin Hubble and others, that space is indeed expanding, and the galaxies are moving ever further apart. Logically, therefore, at a much earlier time, the universe was much smaller, and ultimately concentrated in a single point or singularity, which we call the Big Bang. Thus, the universe does seem to have some intrinsic (outward) directionality. In our everyday lives, however, we are not physically conscious of this movement, and it is difficult to see how we can perceive the expansion of the universe as an arrow of time.
The cosmological arrow of time may be linked to, or even dependent on, the thermodynamic arrow, given that, as the universe continues to expand and heads towards an ultimate “Heat Death” or “Big Chill”, it is also heading in a direction of increasing entropy, ultimately arriving at a position of maximum entropy, where the amount of usable energy becomes negligible or even zero. This accords with the Second Law of Thermodynamics in that the overall direction is from the current semi-ordered state, marked by outcroppings of order and structure, towards a completely disordered state of thermal equilibrium. What remains a major unknown in modern physics, though, is exactly why the universe had a very low entropy at its origin, the Big Bang.
It is also possible – although less likely according to the predictions of current physics – that the present expansion phase of the universe could eventually slow, stop, and then reverse itself under gravity. The universe would then contract back to a mirror image of the Big Bang known as the “Big Crunch” (and possibly a subsequent “Big Bounce” in one of a series of cyclic repetitions). As the universe contracts and collapses, entropy will in theory start to reduce and, presumably, the arrow of time will reverse itself and time will effectively begin to run backwards. In this scenario, then, the arrow of time that we experience is merely a function of our current place in the evolution of the universe and, at some other time, it could conceivably change its direction.
However, there are paradoxes associated with this view because, looked at from a suitably distant and long-term viewpoint, time will continue to progress “forwards” (in some respects at least), even if the universe happens to be in a contraction phase rather than an expansion phase. So, the cosmic asymmetry of time could still continue, even in a “closed” universe of this kind.
Radiative Arrow of Time
Waves, whether light, radio waves, sound waves or water waves, always radiate outwards from their sources. While theoretical equations do allow for the opposite (convergent) waves, this is apparently never seen in nature. This asymmetry is regarded by some as a reason for the asymmetry of time.
It is possible that the radiative arrow may also be linked to the thermodynamic arrow, because radiation suggests increased entropy while convergence suggests increased order. This becomes particularly clear when we consider radiation as having a particle aspect (i.e. as consisting of photons) as quantum mechanics suggests.
Quantum Arrow of Time
The whole mechanism of quantum mechanics (or at least the conventional Copenhagen interpretation of it) is based on Schrödinger’s Equation and the collapse of wave functions, and this appears to be a time-asymmetric phenomenon. For example, the location of a particle is described by a wave function, which essentially gives various probabilities that the particle is in many different possible positions (or superpositions), and the wave function only collapses when the particle is actually observed. At that point, the particle can finally be said to be in one particular position, and all the information from the wave function is then lost and cannot be recreated. In this respect, the process is time-irreversible, and an arrow of time is created.
Some physicists, including the team of Aharonov, Bergmann and Lebowitz in the 1960s, have questioned this finding, though. Their experiments concluded that we only get time-asymmetric answers in quantum mechanics when we ask time-asymmetric questions, and that questions and experiments can be framed in such a way that the results are time-symmetric. Thus, quantum mechanics does not impose time asymmetry on the world; rather, the world imposes time asymmetry on quantum mechanics.
It is not clear how the quantum arrow of time, if indeed it exists at all, is related to the other arrows, but it is possible that it is linked to the thermodynamic arrow, in that nature shows a bias for collapsing wave functions into higher entropy states versus lower ones.
Weak Nuclear Force Arrow of Time
Of the four fundamental forces in physics (gravity, electromagnetism, the strong nuclear force and the weak nuclear force), the weak nuclear force is the only one that does not always manifest complete time symmetry. To some limited extent, therefore, there is a weak force arrow of time, and this is the only arrow of time which appears to be completely unrelated to the thermodynamic arrow.
The weak nuclear force is a very weak interaction in the nucleus of an atom, and is responsible for, among other things, radioactive beta decay and the production of neutrinos. It is perhaps the least understood and strangest of the fundamental forces. In some situations the weak force is time-reversible, e.g. a proton and an electron can smash together to produce a neutron and a neutrino, and a neutron and a neutrino smashed together CAN also produce a proton and an electron (even if the chances of this happening in practice are very small). However, there are examples of the weak interaction that are time-irreversible, for example the case of the oscillation and decay of neutral kaon and anti-kaon particles. Under certain conditions, it has been shown experimentally that kaons and anti-kaons actually decay at different rates, indicating that the weak force is not in fact time-reversible, thereby establishing a kind of arrow of time.
It should be noted, though, that this is not such a strong or fundamental arrow of time as the thermodynamic arrow (the difference is between a process that could go either way but in a slightly different way or at a different rate, and a truly irreversible process – like entropy – that just cannot possibly go both ways). Indeed, it is such a rare occurrence, so small and barely perceivable in its effect, and so divorced from any of the other arrows, that it is usually characterized as an inexplicable anomaly.
Causal Arrow of Time
Although not directly related to physics, causality appears to be intimately bound up with time’s arrow. By definition, a cause precedes its effect. Although it is surprisingly difficult to define cause and effect satisfactorily, the concept is readily apparent in the events of our everyday lives. If we drop a wineglass on a hard floor, it will subsequently shatter, whereas shattered glass on the floor is very unlikely to subsequently reassemble into an unbroken wineglass in the hand. By causing something to happen, we are to some extent controlling the future, whereas, whatever we might do, we cannot change or control the past.
Once again, though, the underlying principle may well come back to the thermodynamic arrow: while disordered shattered glass can easily be made out of a well-ordered wineglass, the reverse is much more difficult and unlikely.
Psychological Arrow of Time
A variant of the causal arrow is sometimes referred to as the psychological or perceptual arrow of time. We appear to have an innate sense that our perception runs from the known past to the unknown future. We anticipate the unknown, and automatically move forward towards it, and, while we are able to remember the past, we do not normally waste time in trying to change the already known and fixed past.
Stephen Hawking has argued that even the psychological arrow of time is ultimately dependent on the thermodynamic arrow, and that we can only remember past things because they form a relatively small set compared to the potentially infinite number of possible disordered future sets.
Some thinkers, including Stephen Hawking again, have pinned the direction of the arrow of time on what is sometimes called the weak anthropic principle, the idea that the laws of physics are as they are solely because those are the laws that allow the development of sentient, questioning beings like ourselves. It is not that the universe is in some way “designed” to allow human beings, merely that we only find ourselves in such a universe because it is as it is, even though the universe could easily have developed in a quite different way with quite different laws.
Thus, Hawking argues, a strong thermodynamic arrow of time is a necessary condition for intelligent life as we know it to develop. For example, beings like us need to consume food (a relatively ordered form of energy) and convert it into heat (a relatively disordered form of energy), for which a thermodynamic arrow like the one we see around us is necessary. If the universe were any other way, we would not be here to observe it.