"Our results show why debates about controversial issues often seem so futile," the researchers said.
In what feels like an increasingly polarised world, trying to convince the "other side" to see things differently often feels futile. Psychology has done a great job outlining some of the reasons why, including showing that, regardless of political leanings, most people are highly motivated to protect their existing views.
However, a problem with some of this research is that it is very difficult to construct opposing real-life arguments of equal validity, which makes it hard to compare fairly how people treat arguments they agree and disagree with.
To get around this problem, an elegant new paper in the Journal of Cognitive Psychology has tested people's ability to assess the logic of formal arguments (syllogisms) structured in the exact same way, but that featured wording that either confirmed or contradicted their existing views on abortion. The results provide a striking demonstration of how our powers of reasoning are corrupted by our prior attitudes.
Vladimíra Čavojová at the Slovak Academy of Sciences and her colleagues recruited 387 participants in Slovakia and Poland, mostly university students. The researchers first assessed the students' views on abortion (a highly topical and contentious issue in both countries), then they presented them with 36 syllogisms – these are formal logical arguments that come in the form of three statements (see examples, below).
The participants' challenge was to determine whether the third statement of each syllogism followed logically from the first two. This was a test of pure logical reasoning: to succeed at the task, one only needs to assess the logic, setting aside one's prior knowledge or beliefs. To reinforce this, the participants were instructed to always treat the first two premises of each syllogism as true.
Crucially, while some of the syllogisms were neutral, others featured a final statement germane to the abortion debate, either on the side of pro-life or pro-choice (but remember this was irrelevant to the logical consistency of the syllogisms).
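The validity test the participants faced has a precise, mechanical answer: a syllogism is valid exactly when no possible model makes both premises true and the conclusion false. As a minimal sketch (the function names and the brute-force approach are illustrative, not from the study), one can check categorical syllogisms by enumerating all assignments of a small domain to the three terms; for syllogisms of this form, a three-element domain is enough to find a countermodel if one exists:

```python
from itertools import product

# Categorical statement forms over sets of domain elements.
def all_(xs, ys): return xs <= ys          # "All X are Y"
def some(xs, ys): return bool(xs & ys)     # "Some X are Y"
def no(xs, ys):   return not (xs & ys)     # "No X are Y"

def follows(premise1, premise2, conclusion, domain_size=3):
    """Return True iff the conclusion holds in every model where both premises hold."""
    domain = range(domain_size)
    # Each element independently belongs (or not) to terms A, B, C: 8 options each.
    for membership in product(range(8), repeat=domain_size):
        A = {i for i, m in zip(domain, membership) if m & 1}
        B = {i for i, m in zip(domain, membership) if m & 2}
        C = {i for i, m in zip(domain, membership) if m & 4}
        if premise1(A, B) and premise2(B, C) and not conclusion(A, C):
            return False  # found a countermodel: premises true, conclusion false
    return True

# Valid: All A are B; All B are C; therefore All A are C.
print(follows(all_, all_, all_))   # True
# Invalid: All A are B; Some B are C; therefore Some A are C.
print(follows(all_, some, some))   # False
```

The point of the sketch is that validity depends only on the argument's form, never on what A, B and C stand for; swapping in emotionally loaded terms about abortion changes nothing about the logic, which is exactly why the participants' attitude-driven errors count as a reasoning bias.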
Čavojová and her team found that the participants' existing attitudes to abortion interfered with their powers of logical reasoning – the size of this effect was modest but statistically significant.
Mainly, the participants had trouble accepting as logical those valid syllogisms that contradicted their existing beliefs, and, similarly, rejecting as illogical those invalid syllogisms that conformed with them. This seemed to be particularly the case for participants with more pro-life attitudes. What's more, this "my-side bias" was actually greater among participants with prior experience or training in logic. The researchers aren't sure why, but perhaps such training gave participants even greater confidence to accept syllogisms that supported their current views; whatever the reason, it shows again what a challenge it is for people to think objectively.
"Our results show why debates about controversial issues often seem so futile," the researchers said. "Our values can blind us to acknowledging the same logic in our opponent's arguments if the values underlying these arguments offend our own."
This is just the latest study to illustrate the difficulty we have in assessing evidence and arguments objectively. Related research that we've covered recently has also shown that our brains treat opinions we agree with as facts; that many of us over-estimate our knowledge; that we're biased to see our own theories as accurate; and that when the facts appear to contradict our beliefs, we turn to unfalsifiable arguments. These findings and others show that thinking objectively does not come easily to most people.
Do you think racial stereotypes are false? Are you sure? I’m not asking if you’re sure whether or not the stereotypes are false, but if you’re sure whether or not you think that they are. That might seem like a strange question. We all know what we think, don’t we?
Most philosophers of mind would agree, holding that we have privileged access to our own thoughts, which is largely immune from error. Some argue that we have a faculty of ‘inner sense’, which monitors the mind just as the outer senses monitor the world. There have been exceptions, however. The mid-20th-century behaviourist philosopher Gilbert Ryle held that we learn about our own minds, not by inner sense, but by observing our own behaviour, and that friends might know our minds better than we do. (Hence the joke: two behaviourists have just had sex and one turns to the other and says: ‘That was great for you, darling. How was it for me?’) And the contemporary philosopher Peter Carruthers proposes a similar view (though for different reasons), arguing that our beliefs about our own thoughts and decisions are the product of self-interpretation and are often mistaken.
Evidence for this comes from experimental work in social psychology. It is well established that people sometimes think they have beliefs that they don’t really have. For example, if offered a choice between several identical items, people tend to choose the one on the right. But when asked why they chose it, they confabulate a reason, saying they thought the item was a nicer colour or better quality. Similarly, if a person performs an action in response to an earlier (and now forgotten) hypnotic suggestion, they will confabulate a reason for performing it. What seems to be happening is that the subjects engage in unconscious self-interpretation. They don’t know the real explanation of their action (a bias towards the right, hypnotic suggestion), so they infer some plausible reason and ascribe it to themselves. They are not aware that they are interpreting, however, and make their reports as if they were directly aware of their reasons.
Many other studies support this explanation. For example, if people are instructed to nod their heads while listening to a tape (in order, they are told, to test the headphones), they express more agreement with what they hear than if they are asked to shake their heads. And if they are required to choose between two items they previously rated as equally desirable, they subsequently say that they prefer the one they had chosen. Again, it seems, they are unconsciously interpreting their own behaviour, taking their nodding to indicate agreement and their choice to reveal a preference.
Building on such evidence, Carruthers makes a powerful case for an interpretive view of self-knowledge, set out in his book The Opacity of Mind (2011). The case starts with the claim that humans (and other primates) have a dedicated mental subsystem for understanding other people’s minds, which swiftly and unconsciously generates beliefs about what others think and feel, based on observations of their behaviour. (Evidence for such a ‘mindreading’ system comes from a variety of sources, including the rapidity with which infants develop an understanding of people around them.) Carruthers argues that this same system is responsible for our knowledge of our own minds. Humans did not develop a second, inward-looking mindreading system (an inner sense); rather, they gained self-knowledge by directing the outward-looking system upon themselves. And because the system is outward-looking, it has access only to sensory inputs and must draw its conclusions from them alone. (Since it has direct access to sensory states, our knowledge of what we are experiencing is not interpretative.)
The reason we know our own thoughts better than those of others is simply that we have more sensory data to draw on – not only perceptions of our own speech and behaviour, but also our emotional responses, bodily senses (pain, limb position, and so on), and a rich variety of mental imagery, including a steady stream of inner speech. (There is strong evidence that mental images involve the same brain mechanisms as perceptions and are processed like them.) Carruthers calls this the Interpretive Sensory-Access (ISA) theory, and he marshals a huge array of experimental evidence in support of it.
The ISA theory has some startling consequences. One is that, with limited exceptions, we do not have conscious thoughts or make conscious decisions. For, if we did, we would be aware of them directly, not through interpretation. The conscious events we undergo are all sensory states of some kind, and what we take to be conscious thoughts and decisions are really sensory images – in particular, episodes of inner speech. These images might express thoughts, but they need to be interpreted.
Another consequence is that we might be sincerely mistaken about our own beliefs. Return to my question about racial stereotypes. I guess you said you think they are false. But if the ISA theory is correct, you can’t be sure you think that. Studies show that people who sincerely say that racial stereotypes are false often continue to behave as if they are true when not paying attention to what they are doing. Such behaviour is usually said to manifest an implicit bias, which conflicts with the person’s explicit beliefs. But the ISA theory offers a simpler explanation. People think that the stereotypes are true but also that it is not acceptable to admit this and therefore say they are false. Moreover, they say this to themselves too, in inner speech, and mistakenly interpret themselves as believing it. They are hypocrites but not conscious hypocrites. Maybe we all are.
If our thoughts and decisions are all unconscious, as the ISA theory implies, then moral philosophers have a lot of work to do. For we tend to think that people can’t be held responsible for their unconscious attitudes. Accepting the ISA theory might not mean giving up on responsibility, but it will mean radically rethinking it.
This article was originally published at Aeon and has been republished under Creative Commons.
Your brain stops at the most comforting thought. The truth is somewhere beyond that. Using scientific skepticism as a guide, astrophysicist Lawrence Krauss outlines the questions that critical thinkers ask themselves.
Strange answers aren’t inherently wrong, and satisfying answers aren’t inherently right, says Lawrence Krauss in this critical thinking crash course. The astrophysicist explains how principles of scientific skepticism can be applied beyond the laboratory; they can serve as a filter for the nonsense and misinformation we encounter each and every day. Here, he establishes a handful of core questions that critical thinkers ask themselves, which can be used to challenge your misconceptions and sense of comfort, question inconsistency, and think past your brain's evolved biases. Piece by piece, you can systematically remove nonsense from your life. Lawrence Krauss' most recent book is The Greatest Story Ever Told -- So Far: Why Are We Here?
How did our world come to be ruled by a view of human nature that contradicts the testimony of much of history, and the bulk of the arts, and your daily experience? Mathoholics are to blame.
1. History will puzzle over our era’s ruling faith in rationalism. Behavioral economics is shaking that faith, but, as Nick Romeo notes, Plato described “cognitive biases” ~24 centuries ago.
2. And Plato is far from alone. Hasn’t every realistic writer described humanity’s everywhere-evident cognitive foibles? Except some math-obsessed economists?
3. Doesn’t history, and the arts, and daily experience, testify against those hyper-rational individualists of econo-models?
4. For instance, here's Shakespeare on confirmation bias: “Trifles light as air / Are to the jealous confirmations strong / As proofs...”
5. The gist of many cognitive biases shouldn’t surprise non-economists (“a bird in the hand is worth two in the bush” = “loss aversion”).
7. Beyond the fun of footnoting philosophy-founding dialogues with cognitive biases, Plato would have laughed at econo-rationalism.
9. He knew we’re irrationally persuadable. He hated sophists for teaching how to sell seductive surfaces over substance (marketing over product). Marketing has always exploited cognitive biases, even when they went under-theorized.
10. Even as many economists declare that we’re rational optimizers, businesses operate on the profitable principle that there’s an easily manipulable fool born every minute.
11. But Plato abetted modern rationalism’s rise by popularizing math-lust. 2,000 years later “falling in love with geometry” was an Enlightenment “occupational hazard.” And today similar math-worship (for algebra + stats) drives economists to irrational math-oholic fantasies.
12. Largely unnoticed is how Plato’s dialogues dramatize the shortcomings of “cognitive individualism.”
13. Social cognition research shows that “individual knowledge is always remarkably shallow”, so “we never think alone.”
14. Isn’t it self-evident that we evolved to reason socially? Thinking, like every other significant aspect of human nature, evolved collectively and tribally (not econo-individualistically).
15. Intriguingly, while “confirmation bias” worsens solo thinking, it can improve group reasoning: other cognitive perspectives counter your biases, so don’t think alone, or with cognitive clones.
16. Countering cognitive individualism is how science succeeds (bias-balancing processes).
17. That famed scientific-institution motto, "take no man's word for it," also applies to your own word. Feeling sure that you’re right often isn’t a reliable intuition. We fall in love with ideas and methods and become blind to our beloved’s faults.
18. Math-method-loving economists strengthen faith in rationalism by routinely excluding "obvious empirical" facts if they’re not equation-friendly. This “equation filtering” begets “theory-induced blindness” (a field-wide, method-level bias).
19. This math-fashioned folly must misrepresent us for its beloved math model-making to work. The defense that models, like maps, must exclude details fails here, because these models ignore known roadblocks. There’s no efficient-allocation market nirvana without rationally optimizing masses.
20. Beyond the matho-pathology of unbehavioral economics, misplaced faith in rationalism enabled Donald Trump’s presidency. He grasps empirical psychology better than many rationalists. Every salesperson knows persuasion isn’t factual or logical, but unavoidably emotional, and trust-dependent (see Aristotle on ethos, pathos, logos).
21. Ways of life that deny our deeply limited, deeply flawed, deeply social nature are doomed to history’s dustbin.
Illustration by Julia Suits, The New Yorker cartoonist & author of The Extraordinary Catalog of Peculiar Inventions