The Sartre Fallacy Part II: Is It Inevitable?
If we know that we are bad at predicting and can account for the underlying psychology, then why do we continue to make bad predictions?
Years before my run-in with Monsieur Sartre I landed a summer job in the painting business. If you’ve painted houses, perhaps you ran into the same problem I did: poor planning. One summer I discovered that a one-week job took closer to two weeks; a three-week job lasted about a month and a half, and so on. I devised a rule of thumb: double your completion date. The problem is that I didn’t stick to this heuristic even though I knew it was sound. Why? Experience and knowledge do not necessarily improve judgment; we’ve seen, in fact, that sometimes the opposite occurs. The mind is stubborn; we stick to our intuitions despite the evidence.
Let’s go beyond the anecdote. In the spring of 2005 Bent Flyvbjerg, Mette K. Skamris Holm, and Søren L. Buhl published an article in the Journal of the American Planning Association that presented “results from the first statistically significant study of traffic forecasts in transportation infrastructure projects.” The paper gathered data from rail and road projects undertaken worldwide between 1969 and 1998. They found that ridership was overestimated in over 90 percent of rail projects and that 90 percent of rail and road projects fell victim to cost overruns. Worse, although it became obvious that most planners underestimate the required time and money, their accuracy actually declined over the years. Today, a sizable engineering feat completed on time and within budget is practically mythical.
In Thinking, Fast and Slow Daniel Kahneman describes the planning fallacy as “plans or forecasts that are unrealistically close to best-case scenarios.” Two dramatic examples come to mind. In 1957 the Sydney Opera House was estimated to cost $7 million (Australian dollars), with completion set for early 1963. It opened in 1973 with a price tag of $102 million. Boston’s Big Dig was nearly a decade late and roughly $12 billion over budget. The one exception I can think of from the engineering world is New York’s Empire State Building, completed in 410 days, several months ahead of schedule, at $24.7 million, a little over half of the projected $43 million.
Around the time I was painting houses I discovered more examples of the planning fallacy in other domains. I eventually landed on this question: if we know that we are bad at predicting and can account for the underlying psychology, then why do we continue to make bad predictions? Kahneman suggests that to improve predictions we should consult “the statistics of similar cases.” However, I realized that the two biases that contribute to the planning fallacy, overconfidence and optimism, also distort any effort to use similar cases to generate more objective projections. Even when we have access to the knowledge required to make a reasonable estimate, we choose to ignore it and focus instead on illusory best-case scenarios.
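Kahneman’s advice to consult “the statistics of similar cases” is often called taking the outside view, and it can be reduced to simple arithmetic: scale your intuitive estimate by the typical overrun observed in comparable past projects. The sketch below illustrates the idea; the function name and all the numbers are hypothetical, not from any study cited here.

```python
# Illustrative sketch of the "outside view": anchor a new forecast on the
# distribution of outcomes in a reference class of similar past cases,
# rather than on a best-case scenario. All figures are hypothetical.

from statistics import median

# Ratio of actual to estimated duration for past, comparable projects
past_overrun_ratios = [1.4, 2.1, 1.8, 1.6, 2.3]

def outside_view_estimate(inside_view_weeks, ratios):
    """Scale the intuitive ("inside view") estimate by the typical
    overrun observed in the reference class."""
    return inside_view_weeks * median(ratios)

# An intuitive one-week paint job, corrected by the reference class:
print(outside_view_estimate(1.0, past_overrun_ratios))  # → 1.8
```

The median is used rather than the mean so that a single catastrophic overrun in the reference class does not dominate the correction. My painting heuristic of doubling the completion date was, in effect, a crude version of this calculation.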
This idea returns me to my last post, where I coined the term the Sartre Fallacy to describe cases in which acquiring information that warns or advocates against X influences us to do X. I named the fallacy after de Beauvoir’s lover because I acted like a pseudo-intellectual, thereby living less authentically, after reading Being and Nothingness. I noticed other examples from cognitive psychology. Learning about change blindness caused participants in one study to overestimate their ability to detect the visual mistake. They suffered from change blindness blindness. The planning fallacy provides another example. When planners notice poor projections made in similar projects they become more confident instead of making appropriate adjustments (“We’ll never be that over budget and that late”). This was my problem: when I imagined the worst-case scenario, my confidence in the best-case scenario increased.
After I posted the article I was happy to notice an enthusiastic response in the comment section. Thanks to the sagacity of my commenters I identified a problem with the Sartre Fallacy. Here it is; follow closely. If you concluded from the previous paragraph that you would not make the same mistake as the participants who committed change blindness blindness, then you’ve committed what I cheekily term the Sartre Fallacy Fallacy (or change blindness x3). If you conclude from the previous sentence that you would not commit the Sartre Fallacy Fallacy (or change blindness x3), then, mon ami, you’ve committed the Sartre Fallacy Fallacy Fallacy (or change blindness x4). I’ll stop there. The idea, simply, is that we tend to read about biases and conclude that we are immune to them because we know they exist. This is, of course, itself a bias, and as we’ve seen it quickly leads to an ad infinitum problem.
The question facing my commenters and me is whether the Sartre Fallacy is inevitable. For the automatic, effortless, stereotyping, overconfident, quick-judging System 1 the answer is yes. Even the most assiduous thinkers will jump to the conclusion that they are immune to innate biases after reading about innate biases, if only for a split second. Kahneman himself notes that after more than four decades of researching human error he (his System 1) still commits the mistakes his research demonstrates.
But this does not imply that the Sartre Fallacy is unavoidable. Consider a study published in 1996. Lyle Brenner and two colleagues gave students from San Jose State University and Stanford mock legal scenarios. There were three groups: one heard from one side’s lawyer, the second heard from the other side’s lawyer, and the third, a mock jury, heard both sides. The bad news is that even though the participants were aware of the setup (they knew whether they were hearing one side or the entire story), those who heard one-sided evidence gave more confident judgments than those who heard both sides. However, the researchers also found that simply prompting participants to consider the other side’s story reduced their bias. The deliberate, effortful, calculating System 2 is capable of rational analysis; we simply need a reason to engage it.
A clever study by Ivan Hernandez and Jesse Lee Preston provides another reason for optimism. In one experiment, liberal and conservative participants read a short pro-capital-punishment article. There were two conditions. The fluent condition read the article in 12-point Times New Roman font; the disfluent condition read the article in an italicized, light-gray Haettenschweiler font. It was difficult to read, and that was the point. Hernandez and Preston found that participants in the latter condition “with prior attitudes on an issue became less extreme after reading an argument on the issues in a disfluent format.” We run on autopilot most of the time. Sometimes offsetting biases means pausing and giving System 2 a chance to assess the situation more carefully.
One last point. If the Sartre Fallacy were inevitable, then we could not account for moral progress. The Yale psychologist Paul Bloom observes in a brief but cogent article for Nature that rational deliberation played a large part in eliminating “beliefs about the rights of women, racial minorities and homosexuals… [held] in the late 1800s.” Bloom’s colleague Steven Pinker similarly argues that reason is one of our “better angels” that helped reduce violence over the millennia:
Reason is… an open-ended combinatorial system, an engine for generating an unlimited number of new ideas. Once it is programmed with a basic self-interest and an ability to communicate with others, its own logic will impel it, in the fullness of time, to respect the interest of ever-increasing numbers of others. It is reason too that can always take note of the shortcomings of previous exercises of reasoning, and update and improve itself in response. And if you detect a flaw in this argument, it is reason that allows you to point it out and defend an alternative.
When Hume noted that “reason is, and ought only to be the slave of the passions,” he was not suggesting that since irrationality is widespread we should lie back and enjoy the ride. He was making the psychological observation that our emotions mostly run the show, and advising a counter-strategy: we should use reason to evaluate the world more accurately in order to decide and behave better. The Sartre Fallacy is not inevitable, just difficult to avoid.
Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.
- Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
- They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
- The research raises many ethical questions and puts to the test our current understanding of death.
The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, but specifically B-grade sci-fi. What instantly springs to mind are the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?
But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week's Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, what we once thought of as dead.
What's dead may never die, it seems
The researchers did not hail from House Greyjoy (“What is dead may never die”) but largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system, that is, a system that supplies the organ with what the circulatory system normally would. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse, and their brains had been completely removed from the skulls.
BrainEx pumped an experimental solution into the brain that essentially mimicked blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains’ immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.
The researchers have managed to keep some brains alive for up to 36 hours, and currently do not know whether BrainEx could have sustained the brains longer. “It is conceivable we are just preventing the inevitable, and the brain won’t be able to recover,” said Nenad Sestan, Yale neuroscientist and the lead researcher.
As a control, other brains received either a fake solution or no solution at all. None showed revived brain activity, and all deteriorated as expected.
The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues of such studies would be brain disorders and diseases. This could point the way to developing new treatments for the likes of brain injuries, Alzheimer’s, Huntington’s, and other neurodegenerative conditions.
“This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain,” Nita Farahany, the bioethicist at the Duke University School of Law who wrote the study’s commentary, told National Geographic.
An ethical gray matter
Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains did not approach neural activity anywhere near consciousness.
The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic if they saw signs of consciousness.
Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.
Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?
"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."
One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.
The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.
"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.
It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.
Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment regarding the suffering of a “cellularly active” brain? The distress of a partially alive brain?
The dilemma is unprecedented.
Setting new boundaries
Another science fiction story that comes to mind when discussing this story is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."
She's right. The researchers undertook their research for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.