How much does cognitive bias change people's perception? Well, without it, the history of computing would be a lot different. And so would many major orchestras, which had to introduce a curtain during auditions so that judges and orchestral directors could evaluate musicians on their skills alone... and not their gender.
When we don't know the reasons behind our choices, we confabulate.
In a now classic experiment, the psychologists Richard E Nisbett and Timothy Wilson at the University of Michigan laid out a range of items, such as pairs of stockings, and asked people to select one. Participants consistently preferred the items furthest to their right. But when they were asked to explain their choices, they did not mention the position of the items, and instead attributed their choices to the superior texture or colour of the chosen pair of stockings, even when the displayed pairs were all identical. People confabulated. Not knowing some of the factors that were determining their choices, they produced an explanation that ignored those factors, and offered instead plausible reasons why the chosen item was better.
This type of behaviour is not confined to experimental situations. In our everyday lives, we often explain our choices earnestly, even when we don’t know some of the facts relevant to why we made them. When we offer an explanation, we propose some plausible argument for choosing the way we did. Suppose a panel shortlists two candidates for a job, and is assessing them after carefully considering their CVs and their performance at the interview. Most people on the panel express a strong preference for John (a white male) over Arya (a woman of colour).
When asked to explain their preferences, the panellists say that John has more experience than Arya, and performed more confidently at the interview. But actually, both candidates have the same amount of relevant job experience, and exhibited the same level of confidence at the interview. The panellists’ preference was the result of an implicit bias against women of colour. As the panellists are not aware of this bias, they lack information relevant to the factors determining their preference. They explain their preference by giving the sorts of reasons commonly accepted in a hiring context. The panellists in this scenario confabulate.
‘Confabulation’ comes from the Latin fabula (‘story’), which can be either a historical account or a fairytale. When we confabulate, we tell a story that is fictional, while believing that it is a true story. As we are not aware that our story is fictional, this is very different from a lie: we have no intention to deceive. So in confabulation there is a mismatch between what we aim to do (tell a true story) and what we end up doing (tell a fictional story). We tend to confabulate when we are asked to explain our choices because we don’t always know the factors responsible for our choices. Yet, when asked why we made a choice, we offer an explanation. The explanation can sound plausible, but is not grounded in the relevant evidence because it doesn’t take into account some of the factors determining our choices.
It seems obvious that confabulation is something we should avoid if we can. It is the result of ignorance and it further spreads misleading information about ourselves (eg, that we choose stockings based on their colour) and about the world (eg, that Arya was less confident than John at her job interview). Yet, counterintuitive as it might seem, confabulation can have benefits as well as costs. I suggest that when we confabulate rather than acknowledge ignorance, we construct a better image of ourselves; we integrate disparate information about ourselves into a coherent story; and we share information about ourselves with others.
Let’s consider each of these three effects in turn. By having an explanation for our choices rather than acknowledging ignorance, we enhance our private and public self-image. Despite our actual state of ignorance about the factors influencing our choices, we present ourselves as agents who know why they make the choices they make and who make choices for good reasons. If the research participants in the Nisbett and Wilson study hadn’t explained their choice of stockings, they would have given the impression of choosing randomly or of not being discerning customers. If the panellists hadn’t provided any reason for preferring John to Arya for the job, their preferences would not have been as authoritative.
Further, when we offer an explanation, an instance of behaviour whose causes are elusive to us can be integrated into a wider system of beliefs, preferences and values that contributes to the overall sense of who we are, which is often called identity. Particular choices fit a pattern of preferences and become part of comprehensive narratives, where reasons make sense of our past behaviour, and shape our future behaviour. If the research participants in the Nisbett and Wilson study attribute to themselves a general preference for brighter stockings or softer nightgowns, such a preference can also be used to interpret their previous behaviour or predict their future consumer choices.
Finally, when we confabulate, we share information about ourselves, and our choices can become an object of conversation and discussion. We receive external feedback on issues that are relevant to our choices, and we can revisit the reasons we use to explain our behaviour. If the panellists claim that their preference for John is due to his greater work experience, the fact that he is better than Arya in this respect can be challenged. John’s CV can be looked at again, leading to a change of preference.
Although our choices are often influenced by external cues and unconscious drives, we tend to see ourselves as competent and largely coherent agents who do and believe things for good reasons. This sense of agency is partly an illusion, but it sustains our motivation to pursue our goals in critical circumstances. When we overestimate our competence, we tend to be more productive, more resilient, better at planning, and more effective at problem-solving. When we view our choices as driven by reasons, and integrate them into a coherent pattern of behaviour, we are more likely to fulfil our goals. The implications of explaining a particular choice for our overall sense of agency become more significant when the choice is self-defining, such as the vote for a political party at a general election or the choice of a life partner – also types of choices that we often explain in a confabulatory manner. Articulating reasons for self-defining choices can be a starting point for dialogue and reflection, potentially leading to change and self-improvement.
Someone could object here that a better-grounded explanation for our choice, including the accurate explanation (eg, ‘I chose this pair of stockings because of position effects, of which at the time I was unaware’), would be better than the confabulation (eg, ‘I chose this pair of stockings because it is more brightly coloured’), and would also spare us from false beliefs. But even if the accurate explanation were available to us, it would be unlikely to play the same self-enhancing and self-integrating role as the confabulatory explanation. Explaining consumer choice based on an unconscious tendency to favour items on our right-hand side does not support the sense that we are competent and coherent agents. Confabulation compromises our understanding of reality and of ourselves, but, when it comes to supporting agency, it often fares better than a well-grounded explanation, or even the accurate one.
This article was originally published at Aeon and has been republished under Creative Commons.
One day in 1995, a large, heavy middle-aged man robbed two Pittsburgh banks in broad daylight. He didn't wear a mask or any sort of disguise. And he smiled at surveillance cameras before walking out of each bank. Later that night, police arrested a surprised McArthur Wheeler. When they showed him the surveillance tapes, Wheeler stared in disbelief. 'But I wore the juice,' he mumbled. Apparently, Wheeler thought that rubbing lemon juice on his skin would render him invisible to videotape cameras. After all, lemon juice is used as invisible ink so, as long as he didn't come near a heat source, he should have been completely invisible.
Police concluded that Wheeler was not crazy or on drugs – just incredibly mistaken.
The saga caught the eye of the psychologist David Dunning at Cornell University, who enlisted his graduate student, Justin Kruger, to see what was going on. They reasoned that, while almost everyone holds favourable views of their abilities in various social and intellectual domains, some people mistakenly assess their abilities as being much higher than they actually are. This 'illusion of confidence' is now called the 'Dunning-Kruger effect', and describes the cognitive bias of inflated self-assessment.
To investigate this phenomenon in the lab, Dunning and Kruger designed some clever experiments. In one study, they asked undergraduate students a series of questions about grammar, logic and jokes, and then asked each student to estimate their overall score, as well as their relative rank compared with the other students. Interestingly, students who scored the lowest in these cognitive tasks consistently overestimated how well they did – by a lot. Students who scored in the bottom quartile estimated that they had performed better than two-thirds of the other students!
This 'illusion of confidence' extends beyond the classroom and permeates everyday life. In a follow-up study, Dunning and Kruger left the lab and went to a gun range, where they quizzed gun hobbyists about gun safety. Similar to their previous findings, those who answered the fewest questions correctly wildly overestimated their knowledge about firearms. Outside of factual knowledge, though, the Dunning-Kruger effect can also be observed in people's self-assessment of a myriad of other personal abilities. If you watch any talent show on television today, you will see the shock on the faces of contestants who don't make it past auditions and are rejected by the judges. While it is almost comical to us, these people are genuinely unaware of how much they have been misled by their illusory superiority.
Sure, it's typical for people to overestimate their abilities. One study found that 80 per cent of drivers rate themselves as above average – far more than is statistically plausible. And similar trends have been found when people rate their relative popularity and cognitive abilities. The problem is that when people are incompetent, not only do they reach wrong conclusions and make unfortunate choices but, also, they are robbed of the ability to realise their mistakes. In a semester-long study of college students, good students could better predict their performance on future exams when given feedback about their scores and relative percentile. However, the poorest performers showed no such recognition, despite clear and repeated feedback that they were doing badly. Instead of being confused, perplexed or thoughtful about their erroneous ways, incompetent people insist that their ways are correct. As Charles Darwin wrote in The Descent of Man (1871): 'Ignorance more frequently begets confidence than does knowledge.'
Interestingly, really smart people also fail to accurately self-assess their abilities. As much as D- and F-grade students overestimate their abilities, A-grade students underestimate theirs. In their classic study, Dunning and Kruger found that high-performing students, whose cognitive scores were in the top quartile, underestimated their relative competence. These students presumed that if these cognitive tasks were easy for them, then they must be just as easy or even easier for everyone else. This so-called 'imposter syndrome' can be likened to the inverse of the Dunning-Kruger effect, whereby high achievers fail to recognise their talents and think that others are equally competent. The difference is that competent people can and do adjust their self-assessment given appropriate feedback, while incompetent individuals cannot.
And therein lies the key to not ending up like the witless bank robber. Sometimes we try things that lead to favourable outcomes, but other times – like the lemon juice idea – our approaches are imperfect, irrational, inept or just plain stupid. The trick is to not be fooled by illusions of superiority and to learn to accurately reevaluate our competence. After all, as Confucius reportedly said, real knowledge is knowing the extent of one's ignorance.
This article was originally published at Aeon and has been republished under Creative Commons.
80% of adults are overly optimistic about life—where does that cognitive bias come from?
There's one brain bias that affects 80% of adults and it has a familiar name you may not expect: optimism. Not always thought of as a cognitive mechanism, the optimism bias leads people to overestimate the likelihood of positive outcomes and to underestimate the likelihood of negative outcomes. It can be hugely helpful in our social lives and in keeping us motivated, even if the trade-off is, at times, the denial of reality. So where does this cognitive bias come from? Are we born with it, or do we develop it as we grow? Developmental psychologist Lori Markson compiles research about how optimism works in babies and young kids, and how that may help us to understand why we adults are the way we are. This video was filmed at the Los Angeles Hope Festival, a collaboration between Big Think and Hope & Optimism.
Natural selection has left us with a world of optimists—is this healthy?
Think you’re not an optimist? Neuroscience begs to differ. Dr. Tali Sharot explains that 80% of people globally present with the optimism bias – even if they describe themselves as pessimists or realists. In a nutshell, the optimism bias is the tendency to think that the future will be better than the past or present, and to underestimate negative experiences and overestimate positive ones. This is neither a good nor a bad thing, but rather it's both: we evolved to be optimistic because our primordial ancestors needed to think that there was something better out there, beyond the cave, in order to survive, migrate, and evolve. Optimism is a powerful motivator and has proven health benefits, but it also has downsides. Here, Sharot explains that delicate balance, and how understanding the nature of our cognitive biases can help us better protect ourselves against failure.
This video was filmed at the Los Angeles Hope Festival, a collaboration between Big Think and Hope & Optimism, a three-year initiative which supported interdisciplinary academic research into significant questions that remain under-explored.
Tali Sharot's newest book is available for pre-order: The Influential Mind: What the Brain Reveals about Our Power to Change Others.