How Do We Deal with Rare Events? A Postmortem of the Week that Shook and Stormed the East Coast
A 5.8 earthquake hits the East Coast. New Yorkers quake. Californians laugh. Along comes a Category 1—no wait, tropical storm—hurricane. Now, not all New Yorkers are quaking; instead, while some quake, others laugh. And not all Californians are laughing; while some laugh, others offer helpful hints on dealing with a dangerous foe. Why the different responses? Why any range of reactions in the first place? One of the main drivers is something called the description-experience gap.
We learn differently from description than we do from experience
In the past, researchers thought that people generally tend to overweight the probability of a rare event: we think we are more likely to experience it than is the case given objective probabilities. We are more afraid of dying in a terrorist attack than a heart attack, of being in a plane crash than a car crash. In other words, we both overestimate and overweight small probabilities, in keeping with the predictions made by Tversky and Kahneman’s Prospect Theory.
However, in recent years, new work has shown that this view is too simplistic. Not only does the rarity of the event matter, but equally important is how we learn about it.
Enter the description-experience gap. When we're trying to gauge the likelihood of a rare event, such as an earthquake or a hurricane, we perceive both the event and its risk-reward tradeoff very differently when we're learning from description than when we're learning from experience. When we learn from experience, the estimates actually flip: we become prone to underestimating and underweighting the probabilities. And in real life, we tend to learn more often from experience than from description.
Here’s a simple illustration. In a 2004 study, experimenters divided participants into two groups, a description group and an experience group. Those in the description group received information in much the same way that past studies had provided it: as lists of choices. For instance, they would see something that read:
A: Get $4 with probability .8, $0 otherwise
B: Get $3 for sure.
They would then make their choice.
Note that in this problem, choice A gives you the higher expected value (0.8 × $4 = $3.20, versus $3.00 for B), so if you care most about maximizing the dollar amount you receive, it’s the one you should choose. I’ll get back to that in a moment.
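The arithmetic behind that claim fits in a few lines (a minimal sketch; the variable names are my own, not from the study):

```python
# Expected value of each option in the description-group problem.
# Option A: $4 with probability 0.8, $0 otherwise. Option B: $3 for sure.
p_win, payoff = 0.8, 4.0
ev_a = p_win * payoff + (1 - p_win) * 0.0  # 0.8 * 4 = 3.20
ev_b = 3.0

print(f"EV(A) = ${ev_a:.2f}, EV(B) = ${ev_b:.2f}")
```

On average, then, the gamble pays 20 cents more per choice than the sure thing.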
In the experience group, the problems were identical – except this time, instead of seeing the options spelled out with their probabilities, participants just saw two buttons and were told that each button had a certain payoff distribution. They could then sample the two buttons—press them to see the outcomes—in whatever order they wanted, for as many times as they wanted. When they were satisfied with this sampling, they would choose one of the two options for the real payoff. In each group, the process was repeated for a total of 25 choices.
The researchers found a striking difference between the two groups. In the group that learned by description, only 36% chose A, the value-maximizing option, in this specific example. In contrast, in the group that learned by experiencing the two outcomes, an overwhelming 88% of participants did so. The gap remained even on questions where the options were negative in value: in a choice between losing $3 for certain and losing $4 with an 80% probability, the experience group tended to choose the certain loss while the description group tended to choose the gamble.
Why was this happening? Just as in real-life experiential learning, those who were learning from their own experience were underweighting the chances of rare events relative to the objective probabilities. That runs against the natural risk attitudes that Prospect Theory nicely captures: we tend to be risk-averse when it comes to gains, preferring to receive a specific amount for sure rather than a potentially greater amount with some probability, and risk-seeking when it comes to losses, preferring to gamble on a loss rather than suffer a certain one, even if the amount at stake in the gamble is larger than the amount we’d lose for sure.
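One common explanation for this underweighting is purely statistical: when people take only a handful of samples, the rare outcome often never shows up at all. A quick simulation makes the point (this is my own illustrative sketch; the sample size of seven presses is an assumption for the example, not a figure from the study):

```python
import random

def sample_button(n, p_win=0.8, payoff=4.0):
    """Draw n presses of the risky button: payoff with probability p_win, else $0."""
    return [payoff if random.random() < p_win else 0.0 for _ in range(n)]

random.seed(42)
n_presses = 7          # assumed: participants take only a few draws before choosing
n_learners = 10_000
# Count learners whose sample never contained the rare $0 outcome.
missed_rare = sum(0.0 not in sample_button(n_presses) for _ in range(n_learners))

# Analytically, P(never seeing $0 in 7 presses) = 0.8 ** 7, about 0.21.
print(f"{missed_rare / n_learners:.0%} of learners never saw the rare $0 outcome")
```

Roughly one sampler in five would walk away believing the risky button always pays $4, so underweighting the rare outcome is almost built into small-sample experience.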
How the description-experience gap plays out in natural disasters
And now, we come to the natural disasters. Why are responses so dramatically different both in terms of geography and even between specific individuals in the same area?
First, the earthquake. Californians laugh: they have been learning repeatedly from experience. They thus tend to underestimate the likelihood of the rare event – and to underplay its potential impact. Partly, this makes sense: most earthquakes are small ones, and the damage is limited. But what will happen in the case of another quake like the 1906 disaster that nearly destroyed San Francisco? Chances are, those who choose to live in the city underestimate the probability of such an occurrence and underweight its potential impact on their lives. And when it does come around, as predictions say it eventually must, the response may end up somewhat delayed.
Now, the hurricane is a slightly different story. First, there’s a question of timing: an earthquake comes with little warning; a hurricane is watched for days. Here, we would expect the same “laughter” from those who often experience hurricane watches and warnings, and the same tension from those who do not: the first group underestimates the likelihood of the storm making landfall as expected, with the strength and direction predicted early on, and underweights the danger of destruction; the second group does the opposite.
However, here is where we also come to differences that are less obvious in the case of earthquakes. The experiences of those offering advice, reacting, and deciding on their own actions may differ significantly. First, how long ago was the rare event experienced? Here, something called the recency effect comes into play: things experienced more recently outweigh those experienced further in the past. Was the last warning followed by a massive storm? Then you’re probably more likely to react to this one. Was the last one far weaker than predicted? Then you’re probably less likely to react to this one. And if you’ve ever experienced a truly devastating instance, it will probably not be forgotten as easily – whereas if you’ve only ever gotten off lightly (as most people in California have, when it comes to earthquakes), you’re again more likely than not to underestimate the chances of something going wrong.
We can’t take our experience for granted
Here, then, we have possible explanations for why some people don’t evacuate despite warnings: first, they underestimate probabilities, based on their experiences, and second, they take their own recent experience as a guideline (“I was fine last time; why should this time be different?”). And that’s all well and good – until it isn’t. That’s the thing about rare events. They are rare for a reason. You can’t predict the impact of one based on another (Are Katrina victims likely to wave off hurricane warnings in the future, as many did prior to the 2005 disaster? Are the victims of the Fukushima earthquake likely to laugh at people’s overreactions?).
Yes, rare events are rare. You are unlikely to suffer from any given one. But we do have a tendency to outsmart ourselves, thinking that we know best because we’ve been there before. We haven’t been there before. No one has. And even if Irene has ended up being less destructive than predicted, that does not mean that future warnings should be taken any less seriously. Just ask those who’ve survived the truly devastating rare events of the century.
If you'd like to receive information on new posts and other updates, follow Maria on Twitter @mkonnikova
[photo credit: courtesy of Ennuipoet's flickr photostream]