Think Tank

You're So Predictable. Daniel Kahneman and the Science of Human Fallibility

I will never know if my vocation as a psychologist was a result of my early exposure to interesting gossip, or whether my interest in gossip was an indication of a budding vocation. Like many other Jews, I suppose, I grew up in a world that consisted exclusively of people and words, and most of the words were about people. . . . The people my mother liked to talk about with her friends and with my father were fascinating in their complexity. Some people were better than others, but the best were far from perfect and no one was simply bad.

– Daniel Kahneman, Autobiography Upon Winning the Nobel Prize


In 2002, psychologist Daniel Kahneman won the Nobel Prize for his work in behavioral economics. One of the more remarkable things about his acceptance speech and the brief autobiography he submitted upon winning the prize is the care he took to acknowledge the contributions of other people to his life’s work – work that has mapped out two systems of human thinking, the fast (intuitive) and the slow (deliberative), and the many pitfalls each is subject to.

This was no “I want to thank all the little people” Oscars toast. As a researcher and theorist Kahneman has dedicated his life to exposing the illusions that color all human judgment, including his own. In a sense, he and his colleagues have been at war for decades with our tendency to lie to ourselves. And judging from his own clear-eyed account of his work, his “adversarial collaboration” model for bridging fierce disagreements in the sciences, and the profound influence his work has exerted on how psychologists and economists think about decision-making, Kahneman is winning. 

The Illusion of Validity

As a young man, Kahneman spent a year in the psychology branch of the Israel Defense Forces. He was tasked with identifying “leadership material” among officer training candidates. The test was a leaderless challenge in which eight candidates had to lift a telephone pole over a wall without touching the pole to the ground or the wall, and without making contact with the wall themselves. One or two natural leaders inevitably emerged and took charge of the situation. Case closed, right? Not exactly.

Kahneman: We were looking for manifestations of the candidates' characters, and we saw plenty: true leaders, loyal followers, empty boasters, wimps - there were all kinds. Under the stress of the event, we felt, the soldiers' true nature would reveal itself, and we would be able to tell who would be a good leader and who would not. But the trouble was that, in fact, we could not tell. Every month or so we had a "statistics day," during which we would get feedback from the officer-training school, indicating the accuracy of our ratings of candidates' potential. The story was always the same: our ability to predict performance at the school was negligible.

Fascinated by the total disconnect between the confidence he and his colleagues felt about their own judgment of “character,” and the instability of those perceived character traits over time, Kahneman coined the phrase “the illusion of validity.” He was to spend much of the rest of his career rooting out such characteristic flaws in human thinking. This is the real contribution of Kahneman’s work, for which he won the Nobel Prize in 2002 – going beyond “to err is human” to pinpoint the patterns of (frequently poor) decision making to which we’re prone as a species. 


An Extremely Reductionist List of Some of the Flaws Kahneman Has Identified in Human Judgment:

  • Confusion between the “experiencing self” and the “remembering self.” For example, saying “that cell phone going off ruined the concert for me,” when in fact, it had ruined only your memory of the concert – not your experience of enjoyment before the cell phone rang. 
  • The focusing illusion: We can’t think about any factor that affects well-being without distorting its importance. For example, people tend to believe that moving to California will make them happier, which turns out not to be true at all. We also tend to overestimate how much happier an increase in income will make us.
  • Loss Aversion: Our dislike of losing is about twice as strong as our enjoyment of winning. In practical terms, this means we’re twice as likely to switch insurance carriers when our policy’s rates go up as when a competitor’s rates go down.
  • Optimism Bias: We tend to overestimate the likelihood of positive outcomes. Thus, most new restaurant owners think they will succeed, even in cities with a 65% failure rate. This tendency is in a kind of perpetual tug-of-war with loss aversion. 
  • Attribute Substitution: When faced with a complex problem, we tend to unconsciously simplify it. Our response, therefore, is often the solution to a related, but completely different problem. This is part of a general psychological tendency to avoid expending too much energy on decision making, and explains many forms of bias. What is racism, after all, besides a shortcut to judging another person’s intelligence or value? 
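The "about twice as strong" figure in the Loss Aversion bullet has a precise form in Kahneman and Tversky's prospect theory. The sketch below is an illustration, not anything from this article: the parameter values (alpha = 0.88, lambda = 2.25) are the median estimates Tversky and Kahneman reported in their 1992 paper on cumulative prospect theory, and the function name is my own.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain (x > 0) or loss (x < 0) under
    prospect theory. `lam` is the loss-aversion coefficient: losses
    are weighted lam times as heavily as equivalent gains."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A $100 loss "hurts" roughly twice as much as a $100 gain feels good:
gain = prospect_value(100)
loss = prospect_value(-100)
ratio = abs(loss) / gain  # approximately 2.25, i.e. lambda
```

The asymmetry is entirely carried by `lam`; the exponent `alpha` < 1 captures a separate effect (diminishing sensitivity, e.g. the difference between $0 and $100 feels larger than the difference between $1,000 and $1,100).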


Kahneman and Tversky: The Mega-Brain  

It is deeply touching to hear Daniel Kahneman talk about his collaboration with his longtime friend and colleague, Amos Tversky, who died in 1996 of metastatic melanoma. Theirs was one of those rare meetings of two intelligences ideally matched – sufficiently alike to communicate seamlessly, yet different enough that their work together was a kind of ongoing, high-level play. Together, says Kahneman, they did better work than either man was capable of on his own. 

Daniel Kahneman: We spent virtually our entire working day together, for years, talking.  Fortunately, I was a morning and he was a night person, so basically our joint working day would be from lunch until dinner.  We were looking for incorrect intuitions in our own thinking.  So we were constructing problems.  We knew the correct solutions, but we were checking whether our intuitive response or immediate response was different from the correct one, or sometimes we were looking for statistics and asking “are these statistics counterintuitive?”  It was a lot of fun.  

Another thing that we were able to do, which people find difficult, is we’re both extremely critical and difficult people, but we were absolutely uncritical with respect to each other and we took each other very seriously.  I don’t think that over the years that we were together either one of us dismissed what the other one had said out of hand, and it wasn’t out of politeness.  It’s just that we assumed that if the other was saying something there might be something in it. 

We were exceptionally lucky in our collaboration.  Together we simply had a better mind than either of us separately and it’s very clear from our joint record we both did, I think, very good work independently of each other, but the work that we did together is just better.  The greatest joy of the collaboration for me especially was that Amos would frequently understand me better than I understood myself. 

Adversarial Collaboration

The fluidity and joy of his work with Tversky, and his own deep-seated aversion to anger, led Kahneman to the concept of “adversarial collaboration” – a structured attempt to bridge disagreements with other scientists through joint studies testing the validity of their conflicting claims. “In the interest of science and civility,” Kahneman co-authored several papers with colleagues hostile to his ideas. Although he admits that adversarial collaboration demands a level of humility that is psychologically challenging for most people (you have to be willing to be wrong, and to spend a lot of time with people who annoy you), it’s a promising model for productive academic discourse.

More broadly, it’s a gesture toward a kind of civility that is increasingly rare (or at least invisible) in academia and society at large, drowned out by conflict-driven politics, media, and the babble from online spaces where anonymity brings out the worst in human nature.

Above all else, Kahneman’s legacy will be a precise, empirical reminder of our own fallibility, and a roadmap of the cognitive traps to which we're most vulnerable. 


Follow Jason Gots (@jgots) on Twitter

