
How Do Leaders And Influencers Emerge? (Psychology)

New research by UTS economist Associate Professor David Goldbaum suggests influential leaders emerge from an evolutionary social process that has less to do with skills and talent than we might think.

We think of leaders and influencers as imbued with special skills and qualities – either innate or hard-won through merit – that propel them to success, high status and financial rewards. Self-help books on how to build leadership skills abound.

However, new research that models the evolution of social networks suggests it is less about individual skills and talents, and more about a dynamic self-reinforcing social process – one that is driven by our instinct to conform to those around us, as well as to seek influence. 

Computer modelling by economist Associate Professor David Goldbaum from UTS Business School reveals that even when everyone in a group has exactly the same attributes, a leader will still emerge from the process. The study, The origins of influence, was recently published in the journal Economic Modelling.

“The findings suggest our view of leadership is over-glorified. It invites a rethink of the notion that a person who gains a leadership position through a competitive process is necessarily more worthy. This is especially so in subjective fields such as art, music, politics or fashion,” said Associate Professor Goldbaum. 

“A leader is someone who has followers – something they may or may not directly control. My aim was to build a model that stripped away any unique attributes, to see if a leader will still emerge,” he said.

“Those who are interested in becoming leaders and influencers would do better to understand the landscape of the popularity game they are playing, than to focus on individual traits.”

— Associate Professor David Goldbaum

To do the analysis Associate Professor Goldbaum developed a computer simulation populated with identical ‘agents’ all employing the same rules of behaviour to govern their decisions.

They could either act autonomously or imitate one another. They could not campaign or persuade others, but they were rewarded for doing what was popular and received a premium for being ahead of the crowd.

Associate Professor Goldbaum let the simulation run thousands of times to see what would happen. In the beginning the actions were random and uncoordinated, but over time the agents, responding to the payoffs, learned to coordinate and began to organise, and a leader emerged from the process. 
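Goldbaum's actual model is specified in the paper; as a loose illustration of the snowball dynamic it describes, the toy simulation below starts with identical, autonomous agents and lets a randomly chosen agent reattach to others in proportion to their current follower counts. The payoff rule and all parameters here are invented for illustration, not taken from the study.

```python
import random

def simulate_leader(n_agents=50, n_rounds=500, seed=1):
    """Identical agents each point at one 'influence target' (initially
    themselves, i.e. acting autonomously). Each round a random agent
    reconsiders and attaches to another agent with probability weighted
    by that agent's current follower count: popularity is rewarded, so
    influence becomes self-reinforcing."""
    rng = random.Random(seed)
    target = list(range(n_agents))  # target[i] == i means autonomous

    def follower_counts():
        counts = [0] * n_agents
        for agent, tgt in enumerate(target):
            if tgt != agent:
                counts[tgt] += 1
        return counts

    for _ in range(n_rounds):
        i = rng.randrange(n_agents)
        weights = [1 + c for c in follower_counts()]  # mild rich-get-richer
        weights[i] = 0  # an agent cannot follow itself
        target[i] = rng.choices(range(n_agents), weights=weights)[0]
    return follower_counts()

counts = simulate_leader()
# identical agents, yet follower counts end up far from uniform:
leader, n_followers = max(enumerate(counts), key=lambda kv: kv[1])
```

Running this repeatedly with different seeds produces a dominant agent most of the time, even though nothing distinguishes that agent at the start – which is the point of the stripped-down design.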

While the model is an extreme case – in the real world there are numerous negotiations going on – it does reveal that who the leader is can matter less than the fact that the group accepts that one person will come out ahead and organises behind them.

“How you get to be the eventual leader is that you slowly build up influence, and as you build up influence, others see that popularity and decide to join the group. It’s a self-reinforcing process – a snowball effect,” said Associate Professor Goldbaum. 

Influencers benefit from a system that rewards early success in gaining followers. Image: Pixabay

“We think of leaders as winners – as though there was a tournament, and they were the best. The simulation is tournament like – because somebody emerges as a leader – but they have not done anything special. They just benefit from a system that rewards early success in gaining followers. 

“Those who are interested in becoming leaders and influencers would do better to understand the landscape of the popularity game they are playing, than to focus on individual traits,” he said.

The findings also help explain why leaders emerge in a group. Our desire to conform and follow allows society to function more smoothly and predictably – for example, the roads would be chaos if everyone created their own rules. And a leader aids this process by coordinating everyone.

And while the whole population benefits from the emergence of a leader, next to the leader, it is the early followers that benefit the most. Through their actions, early followers influence the social evolution, which changes the course of what happens. 

For example, a music promoter’s early backing of a new band helps the band gain more fans, bringing greater financial success to both. Or an art collector acquiring avant-garde art raises the artist’s profile such that museums and galleries take notice, which increases the value of the art. 

An art gallery that acquires avant-garde art raises the artist’s profile. Image: Pixabay

Adjusting the model to allow for individual differences shows that it is possible to have some influence on the outcome. An agent advantaged with a larger social network than others at the start has a greater chance of becoming a leader, but there is no guarantee of success. Sometimes an agent with fewer connections will still emerge as the leader.

“The model demonstrates that while skill, knowledge, or leadership qualities are possible factors in becoming a leader, just because someone is a leader doesn’t mean they possess those qualities. You can become an accidental guru.” 

Featured image credit: Pixabay


Provided by UTS

Crawling Important Step In Development of Risk Perception (Psychology)

The more crawling experience a baby has, the more likely they are to avoid falling into water, a University of Otago study shows.

Published in Infancy, the work is part of a longitudinal study into the effect locomotor experience has on infants’ avoidance of falling over sudden drop-offs.

Dr Carolina Burnay © University of Otago

Lead author Dr Carolina Burnay, of the School of Physical Education, Sport and Exercise Sciences, says the researchers tested babies’ behaviour around a tub filled with water, termed a water drop-off.

“The main difference between the babies that fell and those who avoided falling in the water was the amount of crawling experience they had.

“A very interesting result was that the amount of prior crawling experience they had informed their perception of the risk and behaviour even when they were already walking – hence it seems very helpful for babies to crawl and explore their environment,” she says.

The findings go against the contemporary tendency to “helicopter parent”.

“Caregivers should be aware of the important role crawling plays in infant development and the benefits of promoting crawling opportunities for their infants. By touching the floor and looking closely at it, infants learn to distinguish safe from unsafe surfaces for locomotion and start avoiding falls, into the water or otherwise.

“Over-protecting babies by limiting their opportunities to self-locomote does not keep them safe; instead, it delays their development of the perception of risky situations.”

Dr Burnay has also conducted a study into how babies interact with a slope leading to water.

The study, just published in Developmental Psychobiology, allowed babies to move into the water down a gradual slope, similar to a beach leading to the ocean. In this case, locomotor experience had no impact on babies’ behaviour – they were more likely to engage in risky behaviour on the slope compared to the drop-off.

“Before these studies, we knew statistics about drowning among babies – numbers like how many babies drown every year, how many drowning incidents occur at beaches or swimming pools, and which ages are most represented in drowning statistics. This new approach investigates how infants relate to bodies of water, and when and how they start perceiving the risk and avoiding drowning.

“If we want to develop better strategies to prevent drowning among young children, we need to understand how they interact with bodies of water and how they learn to perceive the consequences that interacting with bodies of water can impose,” Dr Burnay says.

The study also highlights the risk slopes into bodies of water pose to babies. Parents and those working in water safety should have increased vigilance around such accessways and prevent infants’ access to them in aquatic environments.

Dr Burnay is continuing her studies into how babies interact with bodies of water and is seeking participants (crawlers or walkers aged under 18 months) for testing at Moana Pool in Dunedin.

The babies tested on the water drop-off were from Portugal, while those tested on the water slope were from Dunedin. To determine whether the different findings reflect cultural differences, she is testing babies in both situations.

Publication details:

Experienced crawlers avoid real and water drop-offs, even when they are walking
Carolina Burnay, Rita Cordovil, Chris Button, James L. Croft, David I. Anderson
Infancy

AND

Do infants avoid a traversable slope leading into deep water?
Carolina Burnay, Chris Button, Rita Cordovil, David I. Anderson, James L. Croft
Developmental Psychobiology

Featured image credit: istock


Provided by University of Otago

Study Sheds New Light On Behaviour Called Joint Attention (Psychology)

Scientists have shed new light on a human behaviour called joint attention – the ability for two or more people to share attention about something in the world around us.

For instance, a child and mother may both see a beautiful butterfly, then look to each other to share a smile about the butterfly, so without any words they know they have seen the butterfly ‘together’. 

Some experts have argued that engaging in joint attention underpins human cooperation and it has been suggested that joint attention might represent a key species-difference between humans and other great apes. 

It’s an ability that doesn’t emerge until infants are 9-12 months old and scientists still don’t know if any other species can do it. Scientists say it may also be important in language acquisition, with children connecting words with objects to which they and another individual are jointly attending.

Behaviour

Given the importance of joint attention, psychologists at the universities of York and St Andrews wanted to better understand how to measure the behaviour in young infants who cannot yet talk.

Other scientists had previously suggested that the quality of a look given by a child to an adult could be reliably identified by third-party observers, and that the presence of ‘sharing’ rather than ‘checking’ looks was sufficient to distinguish joint attention from a child looking at an adult for other reasons (e.g. to monitor them). The present study rigorously tested, and challenged, this claim.

As part of the study, they asked participants to watch videos of infants looking at their mothers and decide if the looks were sharing or checking looks. 

Overall, the study revealed low agreement among raters in assigning looks from infants to their mothers, challenging the idea that the quality of infant looks can be reliably distinguished as a marker of joint attention. 
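The paper’s own agreement analysis isn’t reproduced here, but “low agreement among raters” is typically quantified with a chance-corrected statistic such as Cohen’s kappa, which can be sketched in a few lines (the rater labels below are invented for illustration):

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters:
    kappa = (p_observed - p_expected) / (1 - p_expected),
    where p_expected is the agreement expected by chance
    given each rater's marginal label frequencies."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return 1.0 if p_exp == 1 else (p_obs - p_exp) / (1 - p_exp)

# two hypothetical raters labelling four infant looks
kappa = cohens_kappa(["sharing", "checking", "sharing", "sharing"],
                     ["sharing", "checking", "checking", "sharing"])  # 0.5
```

A kappa of 1 means perfect agreement, 0 means agreement no better than chance – which is why raw percent agreement alone can overstate how reliably raters distinguish look types.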

Perspective

Dr Kirsty Graham, from the University of St Andrews, said: “Our participants didn’t agree very well on the types of look, suggesting that it’s really hard to tell whether joint attention is happening from this third-party perspective if you just consider the look itself.”

The study authors suggest that to understand the development of joint attention in humans and to search for it in other species, we have to take an objective approach in measuring observable behaviour, rather than subjective judgements. 

Professor Katie Slocombe, from the University of York’s Department of Psychology, added:

“These results give a clear steer for how we need to identify joint attention events between preverbal infants and adults.

“Having a rigorous, objective way of identifying joint attention events will be key for the next steps in our research, as we investigate whether other species engage in joint attention events and whether joint attention develops in a uniform way in diverse human cultures.”

Featured image: Engaging in joint attention underpins human cooperation, some experts argue © University of York


About this research

The paper, “Detecting joint attention events in mother-infant dyads: Sharing looks cannot be reliably identified by naïve third-party observers”, is published in the journal PLOS ONE.


Provided by University of York

New Insights Into the Relationship Between How We Feel And Our Views On Aging (Psychology)

A new study finds that the disconnect between how old we feel and how old we want to be can offer insights into the relationship between our views on aging and our health.

Subjective age discordance (SAD) – the difference between how old you feel and how old you would like to be – is a fairly new concept in the psychology of aging. However, the work to this point has used SAD with longitudinal data, looking at how people’s views on aging evolve over months or years.

“We wanted to see whether SAD could help us assess day-to-day changes in our views on aging, and how that may relate to our physical health and well-being,” says Shevaun Neupert, co-author of the study and a professor of psychology at North Carolina State University.

SAD is determined by taking how old you feel, subtracting how old you would like to be, and then dividing by your actual age. The higher the score, the older you feel relative to how old you would like to be.
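That calculation is simple enough to sketch directly (the ages below are illustrative, not taken from the study):

```python
def subjective_age_discordance(felt_age, ideal_age, actual_age):
    """SAD = (felt age - ideal age) / actual age.
    Positive scores mean feeling older than you would like to be."""
    return (felt_age - ideal_age) / actual_age

# a 70-year-old who feels 65 but would ideally be 50
score = subjective_age_discordance(65, 50, 70)  # ≈ 0.21
```

Dividing by actual age keeps the score comparable across age groups: a 15-year gap means something different at 25 than at 75.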

For this study, researchers enrolled 116 adults aged 60-90 and 107 adults aged 18-36. Study participants filled out an online survey every day for eight days. The survey was designed to assess how old participants felt each day, their ideal age, their positive and negative mood over the course of the day, any stresses they experienced, and any physical complaints, such as backaches or cold symptoms.

“We found that both older adults and younger adults experienced SAD,” Neupert says. “It was more pronounced in older adults, which makes sense. However, it fluctuated more from day to day in younger adults, which was interesting.”

“We think younger adults are getting pushed and pulled more,” says Jennifer Bellingtier, first author of the paper, and a researcher at Friedrich Schiller University Jena. “Younger adults are concerned about negative stereotypes associated with aging, but may also be dealing with negative stereotypes associated with younger generations and wishing they had some of the privileges and status associated with being older.”

Two additional findings stood out.

“On days when the age you feel is closer to your ideal age, people tend to have a more positive mood,” Bellingtier says. “And, on average, people who have more health complaints also had higher SAD scores.”

Neither finding was surprising, but both show the value of the SAD concept as a tool for understanding people’s views on age and aging. It may also offer a new approach for the way we think about aging and its impacts on health.

“Previous research has found that how old you feel can affect your physical and mental well-being, and interventions to address that have focused on trying to make people feel younger,” Neupert says.

“That approach is problematic, in that it effectively encourages ageism,” says Bellingtier. “Our findings in this study suggest that another approach to improving well-being would be to find ways to reduce this subjective age discordance. In other words, instead of telling people to feel young, we could help people by encouraging them to raise their ‘ideal’ age.”

The paper, “Daily Experiences of Subjective Age Discordance and Well-Being,” is published in the journal Psychology and Aging.


Reference: Jennifer A. Bellingtier et al., Daily experiences of subjective age discordance and well-being, Psychology and Aging (2021). DOI: 10.1037/pag0000621


Provided by North Carolina State University

Why Are Narcissists So Easily Bored? (Psychology)

New research examines the tendencies of narcissists to become bored.

KEY POINTS

  • No one likes to be bored, but for people high in narcissism, it can be almost intolerable.
  • New research explores the connection between boredom, narcissism, and an excessive need for smartphone use.
  • By understanding the factors that lead narcissists to become bored, one can gain better insight into how to manage relationships with them.

With the many sources of stimulation in a highly digitized world, it may be difficult to imagine how anyone can become bored. After all, there’s always some new message or text to check, endless choices of streaming shows and movies, and a constant drumbeat of information about everything from the latest COVID-19 statistics to celebrity scandals. However, because they need a flow of constant attention and admiration, people high in narcissism would seem to be particularly likely to experience this “blah” mental state.

Perhaps you have a cousin who, for as long as you can remember, demanded extra attention and reassurance. This cousin would become enraged and upset when other relatives focused on the younger children in the family. At a recent wedding, with the entire family in attendance, this cousin appeared agitated and ran to the bathroom, remaining there for most of the night. This debacle was nothing new, as the cousin had a long history of upending events ranging from funerals to baby showers when being forced to remain still or quiet while other people stole the glory.

When most people are bored, they manage to find ways to entertain themselves, even if it just means twiddling their thumbs. For people like your cousin, filled with insecurity, a period of time requiring patience can border on mental torment. Left alone with their own thoughts or, worse, the feeling that other people are ignoring them, they look for ways to fill the void.

The idea that an excessive need for relief from boredom can reflect a form of narcissism served as the inspiration for University of Kentucky’s Albert Ksinan and colleagues’ (2021) study on the compulsive use of smartphones. According to Ksinan and his fellow authors, previous research suggests that narcissists “might use smartphones to access social media, where they can curate and present their preferred self-image.” On the other hand, the authors suggested, maybe they do so because they’re bored.

Testing the Boredom-Narcissism Relationship

Apart from the study’s goal of examining smartphone use by narcissists, the University of Kentucky-led research provides valuable insights into boredom as a feature of the daily life of people who need constant admiration and attention. Ksinan and his fellow researchers decided to focus on the age range they thought would be most likely to engage in problematic smartphone use. The online sample of 532 young adults (average age 23 years) completed standard questionnaires assessing narcissism, compulsive smartphone use, and boredom.

The narcissism questionnaires assessed grandiose narcissism with items such as “I prefer to be the center of attention” vs. “I prefer to blend in with the crowd.” The measure of vulnerable narcissism included items such as “I dislike being with a group unless I know that I am appreciated by at least one of those present.”

The instrument assessing boredom proneness (rated on a 7-point scale) includes such sample items as:

  1. It is easy for me to concentrate on my activities.
  2. Time always seems to be passing slowly.
  3. It takes more stimulation to get me going than most people.
  4. In situations where I have to wait, such as a line, I get very restless.
  5. I am often trapped in situations where I have to do meaningless things.
  6. It takes a lot of change and variety to keep me really happy.

How did you score on these items? The average among the study sample was just 3.00, with most participants scoring between 2 and 4; a higher score than this would suggest that you’re constantly looking for excitement. In terms of the study’s purpose, you can also see how someone who feels that “vacuum” described by the authors would constantly be looking for ways to fill it up.

Turning now to the study’s findings, the authors reported that, as they predicted, people scoring high on both narcissism subscales had higher compulsive smartphone use scores (as indexed by items such as “Others complain about me using my mobile phone too much”). However, boredom served to play an important mediating role, at least for those high on the vulnerable narcissism scale. The link between smartphone use and vulnerable narcissism, in other words, was accounted for statistically by boredom scale scores. As the authors concluded, “vulnerable narcissists tend to suffer from feelings of boredom, and they seem to use smartphones as an easy fix to counter the negative feelings stemming from boredom.”

Based on the findings, smartphone use for grandiose narcissists seems to stem from a different need than an attempt to alleviate boredom. For these more gregarious individuals who like to show off on social media, the smartphone becomes an expression of their need to be in the limelight.

Beyond Boredom in Understanding Narcissism

Returning now to the case of that relative of yours, think back on what you believe causes all that distress when other people are the focus of attention. If you see this behavior as an outgrowth of vulnerable narcissism, you may gain a better sense of how to understand and manage your future interactions. Although you may still find the behavior to be annoying, if not upsetting, you can at least gain perspective on what’s behind it. Rather than trying to dominate others, this person is just trying to feel whole inside.

Consider, also, what it’s like when a vulnerable narcissist seeks that validation through constant checking of social media. It must be a tough process indeed when those “likes,” hearts, and comments don’t come flooding in with each post. Seeking validation when validation isn’t there can only become the source of even more insecurity.

To sum up, boredom alone can’t explain all the behavior of a narcissist, even a vulnerable one. However, you can gain important insights to help those in your life find greater fulfillment by allowing their true selves to shine through.

Featured image credit: Gettyimages


Reference

Ksinan, A. J., Mališ, J., & Vazsonyi, A. T. (2021). Swiping away the moments that make up a dull day: Narcissism, boredom, and compulsive smartphone use. Current Psychology: A Journal for Diverse Perspectives on Diverse Psychological Issues, 40(6), 2917–2926. doi: 10.1007/s12144-019-00228-7


Provided by Psychology Today


Text credit: Susan Krauss Whitbourne

Why Are So Many Movies Basically the Same? (Psychology)

People may like them because they produce evolved pleasurable responses.

KEY POINTS

  • An analysis of 1700 books found that most follow just one of six emotional story arcs.
  • A recent paper argues we like certain stories because they activate evolved cognitive mechanisms.
  • People are interested in stories involving overcoming obstacles because they evolved to learn from others.
  • People derive pleasure from seeing (fictional) others succeed, a mechanism that likely helped to promote cooperation.

What do Star Wars and Harry Potter have in common? In addition to having their own theme parks, quite a lot. Both feature orphaned protagonists who discover they have special powers and eventually go on to save the world. In fact, both stories follow a common template that can be found repeatedly in fiction, known as “the hero’s journey” or the monomyth.

In a previous post, I wrote about how true originality in creative work is rare. But what draws us to the same basic stories over and over again? Two recent journal articles offer a possible answer.

There are only about six basic stories

The first, published in 2016 by a group of researchers led by Andrew Reagan at the University of Vermont, analyzed about 1700 books downloaded from Project Gutenberg. The researchers used an AI method called sentiment analysis that assigns an estimate of the emotion expressed in a section of text. For example, the sentence “I’m super excited about my birthday tomorrow!” would be classified as highly positive. They constructed sentiment profiles of each book from start to finish to get a rough sense of their emotional arcs: Do they start happy and end sad, start sad and end happy, or something else? Then they used some statistical methods to see how many different profiles could be found in the set of books.
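The arc-extraction step can be sketched in a few lines. The valence lexicon below is a toy stand-in: the actual study used a large crowd-rated word list and more sophisticated windowing, so this only illustrates the idea of scoring successive slices of a text.

```python
def emotional_arc(text, valence, n_windows=10):
    """Score successive windows of a text by average word valence,
    giving a rough start-to-finish emotional profile."""
    words = text.lower().split()
    size = max(1, len(words) // n_windows)
    arc = []
    for i in range(n_windows):
        window = words[i * size:(i + 1) * size]
        scores = [valence[w] for w in window if w in valence]
        arc.append(sum(scores) / len(scores) if scores else 0.0)
    return arc

# toy lexicon; real analyses use thousands of human-rated words
toy_valence = {"happy": 1.0, "joy": 0.8, "sad": -1.0, "grief": -0.9}
arc = emotional_arc("grief and sad days then joy and happy ending",
                    toy_valence, n_windows=2)
# arc rises from negative to positive: a 'rags to riches' shape
```

Clustering many such profiles is what lets the researchers ask how many distinct arc shapes the corpus actually contains.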

Most books fell into six basic categories, which they called:

  1. Rags to riches (start sad, end happy)
  2. Tragedy (start happy, end sad)
  3. Man in a hole (start happy, get sad, end happy)
  4. Icarus (start sad, get happy, end sad)
  5. Cinderella (start sad, get happy, get sad, end happy)
  6. Oedipus (start happy, get sad, get happy, end sad)
The six basic stories. Source: Reagan et al. (2016).

Somewhat surprisingly, even though rags to riches and tragedy stories were most common, man in a hole, Icarus, and Oedipus stories were slightly more popular, based on download numbers.

This study supports the idea that most stories are, at their core, pretty much the same. But it doesn’t explain why that is, or why people prefer certain types of stories. There’s no reason why someone couldn’t write a riches to riches story in which everyone is happy from beginning to end, but it seems intuitively obvious that that story would be pretty boring.

A new theory argues we were born to like certain stories

An article published this month by anthropologist Manvir Singh argues that what makes stories like Star Wars and Harry Potter so appealing is what Singh calls their “sympathetic plots.”

The sympathetic plot, which is shared by countless stories across cultures, has several core features that are not unlike those of the hero’s journey: A hero has a justifiable goal, faces an obstacle, overcomes it, and earns a reward for themselves or for others.

Singh argues that the reason sympathetic plots are so appealing is that they hijack evolutionarily developed mechanisms for social learning. One of the most valuable skills humans have that distinguishes us from other animals is the ability to learn from others: Rather than having to figure out how to fix a running toilet on your own, you can watch a couple of other people do it on YouTube and you’re pretty much up to speed. As a result, Singh argues, we’re fascinated when we see people encounter obstacles, and we feel some pleasure when we learn how they overcame them. Of course, learning how Luke Skywalker destroyed the Death Star is completely useless knowledge for us, but this social learning mechanism is so deeply ingrained that watching him do it produces the same pleasure response.

Consider an alternative explanation for why we like fiction: It allows us to put ourselves in the roles of the protagonists to simulate solving similar problems. But again, we’ll never need to know how to use The Force or fight goblins. Instead, Singh argues that the pleasure we experience hearing these stories is just a side effect of our interest in learning how people overcome obstacles in general, even fictional ones.

We’re happy when people we like succeed

Another relevant mechanism is what Singh calls “sympathetic joy”: We feel happy when others succeed. Some researchers believe that sympathetic joy evolved to help motivate cooperation: If helping others succeed will make you feel good, this ought to motivate you to help people and vice versa.

These two mechanisms can help explain why a riches to riches story would be so boring: no obstacles to overcome, no success to be had. In other words, it would fail to activate either mechanism.

What about tragic endings?

But not all popular stories follow the sympathetic plot template. Recall that some of the most popular story arcs identified in the analysis of Project Gutenberg books ended in tragedy (so-called Icarus and Oedipus stories). Stories like these would, at the very least, seem to fail to activate any sympathetic joy.

As Singh points out, these evolutionary mechanisms are not the only factors that affect how enjoyable a story is. People may be drawn to how thought-provoking a story is, whether it was written or filmed in a unique way, or whether the ending, while tragic, provided some sort of closure. That is, the existence of the social learning and sympathetic joy mechanisms can explain why so many stories are so similar, but their existence doesn’t mean that every story in history will conform to the sympathetic plot.

None of this of course means that you can write an award-winning novel by following a simple formula (as if rags to riches or “overcoming obstacles” are helpful formulas anyway). Instead, these articles suggest that successful writers have converged on a common storytelling structure that they fill with rich and compelling characters, events, and relationships. As I argued in my earlier post, it doesn’t really matter when we see something familiar in a new way because it can still be enjoyable. Clearly, this is true: People have been consuming essentially the same stories for hundreds of years.

Text and featured image credit: Alan Jern/wallpaperflare


References

  • Reagan, A.J., Mitchell, L., Kiley, D. et al. (2016). The emotional arcs of stories are dominated by six basic shapes. EPJ Data Sci., 5, 31.
  • Singh M. (2021). The Sympathetic Plot, Its Psychological Origins, and Implications for the Evolution of Fiction. Emotion Review, 13(3), 183-198.

Provided by Psychology Today

Why Have So Many Leaders Screwed Up the Return to the Office? (Psychology)

KEY POINTS

  • Most workers would prefer to work from home at least half of the time, surveys show, yet many employers are forcing employees back to the office.
  • Cognitive biases, such as wanting to return to the status quo or envisioning a false consensus, may be hampering leaders’ decisions.
  • Work-from-home functions well for the vast majority of people. A hybrid model with remote work for those who want it may be the best solution.

Due to strong employee resistance and turnover, Google recently backtracked from its plan to force all employees to return to the office, and allowed many to work remotely. Apple’s plan to force its staff back to the office has caused many to leave and has led to substantial internal opposition.

Why are these and so many other leaders forcing employees to return to the office? They must know about the extensive, in-depth surveys from early spring 2021 that asked thousands of employees about their preferences on returning to the office after the pandemic.

All of the surveys revealed strong preferences for working from home post-pandemic at least half the time for over three-quarters of all respondents. A quarter to a third of all respondents desired full-time remote work permanently. From 40 to 55 percent of respondents said they’d quit without permanent remote options for at least half the work week; of these, many would leave if not permitted fully remote work. Minority employees expressed an especially strong preference for remote work to escape in-office discrimination.

Yet many employers intend to force their employees who can easily work remotely back to the office for much or all of the work week.

Leaders frequently proclaim that “people are our most important resource.” Yet the leaders resistant to permitting telework are not living by that principle. Instead, they’re doing what they feel comfortable with, even if it devastates employee morale, engagement, and productivity, and seriously undercuts retention and recruitment, as well as harming diversity and inclusion. In the end, their behavior is a major threat to the bottom line.

The tensions of returning to the office and figuring out the most effective permanent post-pandemic work arrangements are the topic of my book, Returning to the Office and Leading Hybrid and Remote Teams: A Manual on Benchmarking to Best Practices for Competitive Advantage. This article focuses on the blindspots causing leaders to make bad decisions on these topics.

Why Are So Many Leaders Wary of Remote Work?

A large number of leaders want to return to what they saw as “normal” work life. By that, they mean turning back the clock to January 2020, before the pandemic.

Another key concern for many involves personal discomfort. They like the feel of a full, buzzing office. They prefer to be surrounded by others when they work.

Other reasons involve challenges specifically related to remote work. This includes deteriorating company culture and growing work-from-home burnout and Zoom fatigue. Other leaders cited a rise in team conflicts and challenges in virtual collaboration and communication. A final category of concerns relates to a lack of accountability and effective evaluation of employees.

Mental Blindspots Leading to Disastrous Telework Decisions

Why are these leaders resistant to the seemingly obvious solution: a hybrid model for most, with full-time permanent remote work for those who both want it and show high effectiveness and productivity? This is because of cognitive biases, which are mental blindspots that lead to poor strategic and financial decision-making.

Fortunately, by understanding these cognitive biases and taking research-based steps to address them, we can make the best decisions.

Many people feel a desire to go back to the world before the pandemic. They fall for the status quo bias, a desire to maintain or get back what they see as the appropriate situation and way of doing things.

A major factor in leaders wanting everyone to return to the office stems from their personal discomfort with work from home. They spent their career surrounded by other people. They want to resume regularly walking the floors, surrounded by the energy of staff working.

They’re falling for the anchoring bias. This mental blindspot causes us to feel anchored to our initial experiences and information.

The evidence that work from home functions well for the vast majority doesn’t cause them to shift their perspective in any significant manner. The confirmation bias offers an important explanation for this seeming incongruity. Our minds are skilled at ignoring information that contradicts our beliefs and at looking only for information that confirms them.

Reluctant leaders usually tell me they don’t want to do surveys because they feel confident that the large majority of their employees would rather work at the office than at home. They wave aside the fact that the large-scale public surveys show the opposite. For instance, one of the major complaints by Apple employees is a failure to do effective surveys and listen to employees.

In this refusal to do surveys, the confirmation bias is compounded by another cognitive bias, called the false consensus effect. This mental blindspot leads us to envision other people in our in-group — such as those employed at our company — as being much more like ourselves in their beliefs than is the actual case.

What about the specific challenges these resistant leaders brought up related to working from home, ranging from burnout to deteriorating culture and so on? Further inquiry on each problem revealed that the leaders never addressed these work-from-home problems strategically.

They transitioned to telework abruptly as part of the March 2020 lockdowns. Perceiving this shift as a very brief emergency, they focused, naturally and appropriately, on accomplishing the necessary tasks of the organization. They ignored the social and emotional glue that truly holds companies together, motivates employees, and protects against burnout.

That speaks to a cognitive bias called functional fixedness. When we have a certain perception of how systems should function, we ignore other possible functions, uses, and behaviors. We do this even if these new functions, uses and behaviors offer a better fit for a changed situation and would address our problems better.

Conclusion

The post-pandemic office will require the realignment of employer-employee expectations. Leaders need to use research-based strategies to overcome the gut reactions that cause them to fall victim to mental blindspots. Only by doing so can they seize the competitive advantage from using their most important resource effectively to maximize their retention, recruitment, morale, productivity, workplace culture, and thus their bottom line.

Text/Featured image credit: Gleb Tsipursky/gettyimages


References

Tsipursky, G. (2021). Returning to the Office and Leading Hybrid and Remote Teams: A Manual on Benchmarking to Best Practices for Competitive Advantage. Columbus, OH: Intentional Insights Press.


Provided by Psychology Today

How Do Babies Know What’s Alive? (Psychology)

Here’s what research says about how babies navigate the world of the living.

KEY POINTS

  • Babies must decide whether a person or object is alive when exploring if and how to communicate with it.
  • Human features of living things include faces, motion, and goal-directed behavior.
  • Non-human features of living things include self-propelled motion and the ability to respond to communication.

When my son Edwin was about three months old, he started talking. Not with real words, of course, or any intelligible language, but when I talked to him, he suddenly started responding with babbles or raspberries or whatever sound he could muster.

What’s interesting is that these first attempts at communication weren’t just directed at me, but they were also directed at other people, and in some cases, inanimate objects, like the stuffed sheep he had in his crib. In fact, he’d wake up every morning and babble to his sheep while he waited for me to come get him, as if he was chatting about the weather or about what he planned to do that day.

How did he decide who or what to interact with? How do babies figure out what’s alive and what’s not?

Human Features of Living Things

From a very early age, there is evidence that babies pay a lot of attention to some important features of living things that might make learning about what’s alive and what’s not a bit easier. First, even from birth infants have a bias to look at things that have faces (Johnson & Morton, 1991), and they develop a preference for their mothers’ face within the first two months of life (Maurer & Salapatek, 1976).

There is also evidence that newborns prefer to look at things that move like people—what researchers call biological motion—over things that move mechanically (Simion, Regolin, & Bulf, 2008). For example, 4- to 12-month-olds prefer to look at videos of animals over things like cars, boats, and helicopters. They also direct more emotional responses—almost all of them positive—to the animals, often smiling, laughing or waving at them (DeLoache, Pickard, & LoBue, 2011), and talk to animals more than they talk to toys (LoBue, Bloom Pickard, Sherman, Axford, & DeLoache, 2013).

Based on this research, you might not find it surprising that having a face is a particularly important cue about what's alive, which is likely why Edwin judged his stuffed sheep to be an appropriate conversation partner. For example, in a clever set of studies, researchers presented infants with a brown fuzzy stuffed blob about the size of a small dog. The blob either had a face or no face, and researchers found that infants would shift their attention to whatever direction the blob was facing, but only if it had a face. This suggests that a face is a cue both that an object is alive and that its gaze signals where to look next (Johnson, Slaughter, & Carey, 1998).

Other human-like parts might also be a cue for when something is alive, like hands or even legs (Rakison & Butterworth, 1998). To illustrate, another group of researchers showed 5- to 9-month-old babies two toys, a ball and a bear, while a person's hand reached for one of the objects. After the babies watched the person reach for the same object over and over again, the researchers switched the location of the two objects, and the person then reached either for the same object in a different location than before, or for a different object in the same location as before.

The researchers found that the babies were surprised (and looked longer) when they saw the person reach for the new object, not the new location—a finding that the researchers interpreted as evidence that babies were paying the most attention to the person’s intended goal. Importantly, babies didn’t show this response when they saw the same series of events with an inanimate stick instead of a person reaching for the objects, suggesting that they knew that people—not sticks—can have goals in mind (Woodward, 1998).

Non-Human Features of Living Things

Although human-like characteristics are probably a really good cue for when something might be alive, they aren’t the only cues that babies can use. Indeed, there are lots of living things that aren’t human and don’t have features like faces, such as worms, snails, and jellyfish. How do babies know that these things are alive?

Researchers have shown that even when something doesn't look human and doesn't have a face, babies can use self-propelled motion—movement that the object generates by itself—as another cue for animacy. Using a study design similar to the one I just described, where a human hand or a stick reached repeatedly for an object, researchers found that 9- to 12-month-old babies expected the stick to reach for a new object if they first saw the stick move by itself and attempt to pick up one of the objects (Biro & Leslie, 2007). This suggests that babies expect objects that move by themselves to have goals, even if they don't have faces, hands, or feet.

One final feature that infants use to decide whether an object is alive is whether it responds when the infant talks to it. In another study using the brown fuzzy blob described above, experimenters programmed the blob to respond to babies every time they babbled: Whenever the baby babbled, the blob babbled back. When the blob responded to the baby, the baby shifted its gaze to look wherever the blob appeared to look, even if the blob didn't have a face (Johnson, Slaughter, & Carey, 1998).

Bottom Line

Altogether, this research suggests that there are a lot of different cues babies can use to decide if something is alive. In reality, most things that are alive have many of these features all at once—they have faces, they respond to you, and they move by themselves. But the fact that babies can just use one or two of them creates the flexibility to decide that new animals like jellyfish might also be alive even though they don’t share all of the features that humans have. It also explains why just having a face might lead to some interesting conversations between your baby and some of his toys, or why that favorite stuffed animal can become such an important friend.

Text credit: Vanessa LoBue


References

  • Biro, S., & Leslie, A. M. (2007). Infants’ perception of goal‐directed actions: development through cue-based bootstrapping. Developmental science, 10(3), 379-398.
  • DeLoache, J. S., Pickard, M. B., & LoBue, V. (2011). How very young children think about animals. In P. McCardle, S. McCune, J. A. Griffin, & V. Maholmes (Eds.), How animals affect us: Examining the influence of human–animal interaction on child development and human health (pp. 85–99). Washington, DC: American Psychological Association.
  • Johnson, M. H., & Morton, J. (1991). Biology and cognitive development: The case of face recognition. Oxford, England: Basil Blackwell.
  • Johnson, S., Slaughter, V., & Carey, S. (1998). Whose gaze will infants follow? The elicitation of gaze-following in 12-month-olds. Developmental Science, 1(2), 233-238.
  • LoBue, V., Bloom Pickard, M., Sherman, K., Axford, C., & DeLoache, J. S. (2013). Young children’s interest in live animals. British Journal of Developmental Psychology, 31, 57-69.
  • Maurer, D., & Salapatek, P. (1976). Developmental changes in the scanning of faces by young infants. Child development, 523-527.
  • Rakison, D. H., & Butterworth, G. E. (1998). Infants’ use of object parts in early categorization. Developmental Psychology, 34(1), 49-62.
  • Simion, F., Regolin, L., & Bulf, H. (2008). A predisposition for biological motion in the newborn baby. Proceedings of the National Academy of Sciences, 105, 809–813.
  • Woodward, A. L. (1998). Infants selectively encode the goal object of an actor’s reach. Cognition, 69(1), 1-34.

Provided by Psychology Today

How to Change the Mind of the Most Stubborn Person You Know (Psychology)

Science reveals that this method works best to convince others to see your side.

KEY POINTS

  • Changing bizarre or entrenched beliefs requires a defined process to overcome change resistance.
  • Some persuasion methods work much better than others.
  • The type of evidence you use in persuasive arguments is crucial.
  • Change can be transitory or enduring based on how you approach the change initiative.

We all have friends, family, or colleagues who stubbornly hold opinions regardless of evidence to the contrary. Maybe they embrace conspiracy theories or tell you, "I have always done it this way," while harboring bizarre beliefs. Sometimes we get so frustrated by failed persuasion attempts that we feel like banging our heads against a wall, but better strategies exist. Humans resist change when it means admitting that existing beliefs or strategies are, indeed, wrong. Fortunately, psychology research shows that some persuasion methods work better than others.

The persuasion process starts with knowing how others respond to "anomalous data" (Chinn & Brewer, 1993), meaning information that disputes their existing thinking. What if I told you that giving people bonuses to incentivize productivity doesn't work? Unless you agreed, you would probably tell me to mind my own business and defend your beliefs about bonuses. What if I presented you with data indicating that performance decreases after a reward is received? Research supports my contention, because reward anticipation, not attainment, drives performance. Once the reward is received, performance wanes (Lepper et al., 1973). You might say your theory is better, dispute the accuracy of the data I just presented, or even say that I am entitled to my opinion but am indeed wrong. In other words, you have beliefs and a theory about bonuses, and so do I. My theory is different from yours, and herein lies the age-old persuasion problem!

There are at least six distinct ways that people respond to data that disputes their beliefs. Knowing which response is driving the resistance helps you tailor your persuasion effort, so be sure to figure out which one prevails.

  • Ignoring data occurs when individuals are highly committed to their own impressions and beliefs. This response is frequently observed when people completely discount recommendations.
  • Rejecting data means individuals consider the merits of the information but decline to change their theory or behavior related to the topic.
  • Holding data in abeyance is a deferral strategy: neither accepting nor rejecting a different approach, with the intention of revisiting the information later.
  • Reinterpreting data while maintaining existing theory involves considering the ideas advanced. The information is closely scrutinized, but the individual concludes after evaluation that it was flawed, unclear, or irrelevant, leaving existing beliefs intact.
  • Reinterpretation and revision imply partial modification of one's thinking based upon the information provided.
  • Acceptance connotes a successful change effort.

The five change steps

Persuasion means overcoming resistance. While there is no secret formula, there are defined steps that enhance the probability of success. These steps are supported by evidence (Dole & Sinatra, 1998): the five-step process has been tested, and its essential elements are generally more effective than haphazard persuasion attempts.

The first step toward a successful change effort is raising awareness. One beneficial approach to fostering awareness is to create cognitive conflict about current approaches. The individual who needs to change must have at least a small amount of doubt about the efficacy of their approach. Without some doubt, change is unlikely.

The next step is persuading the individual that plausible alternatives exist. Plausibility means that, at a minimum, the individual is willing to consider an alternative strategy because the recommendation is understood, coherent, and relatively simple, and because the proposal is deemed a potential solution. Plausibility doesn't mean acceptance, but it does mean the message is understood and could reasonably eliminate the doubt instilled in step one. In practice, this means the individual verbally confirms that the plan you are proposing is practical and workable.

Third, refutational evidence promotes the formation of different perspectives. Keep in mind the key role of beliefs: for change to stick, you must give people a reason to question the accuracy of their current views and provide them with a compelling reason to make a change. Refutational evidence persuades individuals that their existing representations are flawed in light of inconsistencies with the evidence. By instigating doubt, the goal of refutation is to encourage the nonbeliever to relinquish an existing belief in favor of another.

Next, when the individual begins to doubt the merits of their existing view and sees that there may be a good reason to change, we must provide relevant alternatives. Relevance implies that the individual perceives the alternative recommendation as useful and as potentially able to solve the problem by eliminating the doubt created in step one. Individuals have a higher probability of changing, and are more motivated to consider alternatives, when the change effort satisfies their personal goals.

Finally, few of us can initiate radical change in isolation. We need help, support, and what psychologists call scaffolding. The most enduring change efforts are those that are conducted with the support of significant others. Assuming the factors of awareness, plausibility, refutational evidence, and personal relevance have been met, the individual is more likely to exhibit the motivation to adopt new approaches.


References

  • Chinn, C. A., & Brewer, W. F. (1993). The role of anomalous data in knowledge acquisition: A theoretical framework and implications for science instruction. Review of Educational Research, 63, 1–49.
  • Dole, J. A., & Sinatra, G. M. (1998). Reconceptualizing change in the cognitive construction of knowledge. Educational Psychologist, 33(2–3), 109–128.
  • Lepper, M. R., Greene, D., & Nisbett, R. E. (1973). Undermining children's intrinsic interest with extrinsic reward: A test of the "overjustification" hypothesis. Journal of Personality and Social Psychology, 28, 129–137.


Provided by Psychology Today


Text credit: Bobby Hoffman