Chicken Little or Cassandra? How to Save the World from Future Disasters Being Ignored Today

Global security expert Richard A. Clarke explains the traits of a "Cassandra"—someone who predicts colossal disasters—and why people very rarely listen to their warnings.

Richard Clarke: So we talk about a failed warning as a Cassandra event. And we try to ask ourselves in the book, why did this Cassandra event happen? We find that there are four overall factors. There is the quality of the Cassandra herself or himself. Then there is the decision maker, the audience for the warning. Then there's the issue itself, and the qualities of the issue that make a warning about it hard for people to accept. And the last is the critics: the critics of the person giving the warning, the critics of the Cassandra. What did they say and what did they not say? And in those four column headings—the Cassandra, the decision maker, the issue itself, and the critics—under each of them there are several different criteria. By applying that template to a potential Cassandra event, we think we can begin to tell who's right and who's Chicken Little.

When we look at the Cassandra herself or himself, that's probably the most important determinant of whether or not we have a Cassandra event coming. The most important determinant in what we call the Cassandra coefficient is that human being, because what we're trying to do here is predict the future not by some algorithm or artificial intelligence program; we think the solution to predicting the future is finding people, people who can do it. So what are the qualities of a Cassandra? They're an expert. They're an expert in their field, and they are internationally recognized as such. This is not someone waking up in the middle of the night with a premonition. This is not someone who every year predicts some other disaster. The Cassandras typically have never predicted a disaster before. They're experts, they've never predicted a disaster before, and they are data driven. Usually they run the program that collects the data that convinces them. And when they look at the data, something pops right out at them. Through pattern recognition they see, in a flash, the problem that other experts in the field just don't see, or don't understand. So sometimes they see it first, and later people come along and agree with them. But they see it so clearly, and then another quality of the Cassandra kicks in, one that is necessary to be a Cassandra: their belief in their own analysis, in their own data, is so strong that they feel they must do something about it. They all feel a personal sense of agency, of responsibility. They have to give the warning, and when they are not heeded, when they are ignored, when they are ridiculed, when in some cases they are muzzled or fired, that creates a self-reinforcing loop: they become more strident and more insistent on getting their message out.

Sometimes that's off-putting. Sometimes they are accused of being obstreperous or obsessive. Words like 'doom and gloom' and all sorts of criticisms are leveled at them. And what we found in the Cassandras that we know about, and in some of the ones we think might be Cassandras in the future, is that they all drew the same type of criticism. These are people who fit what we learned an Israeli psychiatrist calls 'sentinel intelligence.' Sentinel intelligence, he described, is something found in high-functioning, high-anxiety people. It's not something that you can learn, he thinks; it's something that is innate. They're the person sitting in the restaurant or the theater who smells smoke first, before anybody else, and doesn't just say, 'Oh, isn't that odd?' but gets up and, with confidence, pulls the fire alarm.

The second factor in our Cassandra coefficient is the decision maker, or the audience. One of the problems with these issues is that frequently there is not a decision maker, or it's a diffuse decision-making process. It's not clear who can pull the trigger, who can respond to the warning. Another characteristic of the decision maker is that frequently they have an agenda of their own. They have a position of responsibility. The president of the United States, the president of a university, the CEO of a corporation: they assumed that job with something in mind. They were going to do X, Y, and Z. Suddenly somebody comes into the room and says, "You can't do that. You can't do your agenda. You have to pay attention to my agenda. And, oh, by the way, paying attention to my agenda is going to cause you to spend money in ways that you didn't want to spend it. It's going to make life inconvenient for people." Decision makers hearing that are, for the most part, very reluctant; they are so fixed on their own agenda that they don't want to hear about something that would pull them off of it. Decision makers in many of these cases are also, frankly, not experts and not trained in any way to understand what the Cassandra is saying. In the case of Bernie Madoff's Ponzi scheme, Harry Markopolos came into the Securities and Exchange Commission six times with mathematical formulas and projections and charts to show the people at the SEC that Madoff was running a Ponzi scheme. He was a quantitative analyst; he had lots of data. But he was talking to lawyers. They didn't get it. Very frequently the problem is that decision makers don't understand, fundamentally don't understand, the science or the math that the Cassandra is giving them.

The issue about which the Cassandra warns is also very important in our Cassandra coefficient. If the issue is of such a magnitude that it would require enormous change, decision makers usually don't want to act. If it's a complex issue, people don't understand it. If it's an issue that has never happened before, people find an excuse for not acting. So sometimes it's the very nature of the issue itself that ensures nothing is going to happen. For example, one of the ones we're looking at today is the possibility of an asteroid impact on Earth. David Morrison is our possible Cassandra. And when we ask people what they think about the risk of a giant asteroid hitting the Earth and destroying a city, people giggle, people laugh. The issue itself causes people not to take it seriously, in part because they've seen science fiction movies with Bruce Willis flying up into space and blowing up asteroids. The issue just seems ridiculous to them. They've never seen it happen before. What Morrison points out is that it has happened before. It's happened a lot in the history of this planet. It just hasn't happened a lot in the history of this planet with human beings on it.

The last thing we look at to decide if we're dealing with a possible Cassandra event, the last element of our Cassandra coefficient, is the critics of the Cassandra. Do the critics take on the Cassandra and say, "No, your data is wrong. You collected the wrong data. You collected the data in the wrong way. You did the wrong kind of analysis on the data"? Do they have a dialogue among experts, or do they just reject the warning out of hand? Sometimes the critics are paid critics, as in the case of the tobacco industry in the United States, which bought experts to say that smoking was not a problem. And perhaps some oil companies in the past have bought experts to diminish the threat of global warming. So the question we ask is: what is the critic saying? Take the case of Dr. James Hansen, the noted NASA scientist now at Columbia University, who says that sea level rise will be much faster and higher than the UN prediction. When we ask other experts about that, they come up with all sorts of criticism, but they never say his data is wrong. When we ask them, "What's wrong with Jim Hansen's data?" they say, "Well, he's not really using the scientific method. He's making leaps of faith." Back to Hansen: "How do you respond to that?" And he says, "If I used the scientific method I would have to melt Greenland. You want me to melt Greenland repeatedly to prove that it would change the world?" What Hansen says to us is that by the time the data is clear enough to satisfy the scientific method, it will be too late. We call that scientific reticence: scientists being reluctant to draw a conclusion because the data has not come in through traditional experimentation. But by the time it has, it will be too late.
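The four column headings Clarke describes can be sketched as a simple checklist score. This is purely an illustrative sketch: the field names, the yes/no criteria, and the equal weighting are my assumptions for demonstration, not Clarke and Eddy's actual rubric, which places several criteria under each heading.

```python
# Illustrative sketch of the four-factor "Cassandra coefficient" template.
# Criteria and equal weighting are assumptions, not the authors' actual rubric.
from dataclasses import dataclass


@dataclass
class Warning:
    # The warner: a recognized, data-driven expert with a sense of agency.
    expert_is_credible: bool
    # The audience: is there a clear decision maker who is able to act?
    decision_maker_can_act: bool
    # The issue: does it seem plausible, or so unprecedented people dismiss it?
    issue_seems_plausible: bool
    # The critics: do they engage the data, or reject it out of hand?
    critics_engage_data: bool


def cassandra_coefficient(w: Warning) -> float:
    """Fraction of factors suggesting a genuine, likely-to-be-ignored warning."""
    factors = [
        w.expert_is_credible,
        not w.decision_maker_can_act,  # a diffuse audience raises the risk
        not w.issue_seems_plausible,   # "ridiculous" issues get laughed off
        not w.critics_engage_data,     # out-of-hand rejection raises the risk
    ]
    return sum(factors) / len(factors)


# Example: a credible expert, no clear decision maker, an outlandish-seeming
# issue, and dismissive critics scores at the top of the range.
print(cassandra_coefficient(Warning(True, False, False, False)))  # 1.0
```

A weighted sum over the full set of criteria would be closer to the book's intent; the point of the sketch is only that the template turns a fuzzy judgment ("is this person a Cassandra?") into a structured comparison across cases.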

Before Bernie Madoff got caught, before Hurricane Katrina and Fukushima devastated cities, and before ISIS formed, there was an expert for each one of those events warning people in power that it would happen. What did those powerful people do? Absolutely nothing. These experts are called 'Cassandras' in hindsight, because as global security expert Richard A. Clarke explains in a previous Big Think video: "Cassandra in Greek mythology was a woman cursed by the gods. The curse was that she could accurately see the future. It doesn’t sound so bad until you realize the second part of the curse, which was no one would ever believe her. And because she could see the future and no one was paying attention to her, she went mad." So how can we graduate from sheepishly identifying Cassandras in hindsight, to recognizing and acting on their real predictions before the impending chaos hits? It's tough because everyone and their uncle is trying to get in on the prediction game. Who can you trust? Fortunately, Clarke and his research partner R.P. Eddy have used case studies to build a detailed template of the four aspects that determine whether we can avoid a Cassandra event: the quality and personal traits of the Cassandra themselves, the reaction of the audience or decision makers in power, the nature of the predicted event (is it too ridiculous to believe?), and the critics of the Cassandra. Even today, there are potential Cassandras predicting events that could be catastrophic to humanity this century. Can we learn from our mistakes in time? Richard A. Clarke and R.P. Eddy's new book is Warnings: Finding Cassandras to Stop Catastrophes.


Are we really addicted to technology?

Fear that new technologies are addictive isn't a modern phenomenon.

Credit: Rodion Kutsaev via Unsplash
Technology & Innovation

This article was originally published on our sister site, Freethink, which has partnered with the Build for Tomorrow podcast to go inside new episodes each month. Subscribe here to learn more about the crazy, curious things from history that shaped us, and how we can shape the future.

In many ways, technology has made our lives better. Through smartphones, apps, and social media platforms we can now work more efficiently and connect in ways that would have been unimaginable just decades ago.

But as we've grown to rely on technology for a lot of our professional and personal needs, most of us are asking tough questions about the role technology plays in our own lives. Are we becoming too dependent on technology to the point that it's actually harming us?

In the latest episode of Build for Tomorrow, host and Entrepreneur Editor-in-Chief Jason Feifer takes on the thorny question: is technology addictive?

Popularizing medical language

What makes something addictive rather than just engaging? It's a meaningful distinction because if technology is addictive, the next question could be: are the creators of popular digital technologies, like smartphones and social media apps, intentionally creating things that are addictive? If so, should they be held responsible?

To answer those questions, we've first got to agree on a definition of "addiction." As it turns out, that's not quite as easy as it sounds.


"Over the past few decades, a lot of effort has gone into destigmatizing conversations about mental health, which of course is a very good thing," Feifer explains. It also means that medical language has entered into our vernacular —we're now more comfortable using clinical words outside of a specific diagnosis.

"We've all got that one friend who says, 'Oh, I'm a little bit OCD' or that friend who says, 'Oh, this is my big PTSD moment,'" Liam Satchell, a lecturer in psychology at the University of Winchester and guest on the podcast, says. He's concerned about how the word "addiction" gets tossed around by people with no background in mental health. An increased concern surrounding "tech addiction" isn't actually being driven by concern among psychiatric professionals, he says.

"These sorts of concerns about things like internet use or social media use haven't come from the psychiatric community as much," Satchell says. "They've come from people who are interested in technology first."

The casual use of medical language can lead to confusion about what is actually a mental health concern. We need a reliable standard for recognizing, discussing, and ultimately treating psychological conditions.

"If we don't have a good definition of what we're talking about, then we can't properly help people," Satchell says. That's why, according to Satchell, the psychiatric definition of addiction being based around experiencing distress or significant family, social, or occupational disruption needs to be included in any definition of addiction we may use.

Too much reading causes... heat rashes?

But as Feifer points out in his podcast, both popularizing medical language and the fear that new technologies are addictive aren't totally modern phenomena.

Take, for instance, the concept of "reading mania."

In the 18th century, an author named J. G. Heinzmann claimed that people who read too many novels could experience something called "reading mania." This condition, Heinzmann explained, could cause many symptoms, including: "weakening of the eyes, heat rashes, gout, arthritis, hemorrhoids, asthma, apoplexy, pulmonary disease, indigestion, blocking of the bowels, nervous disorder, migraines, epilepsy, hypochondria, and melancholy."

"That is all very specific! But really, even the term 'reading mania' is medical," Feifer says.

"Manic episodes are not a joke, folks. But this didn't stop people a century later from applying the same term to wristwatches."

Indeed, an 1889 piece in the Newcastle Weekly Courant declared: "The watch mania, as it is called, is certainly excessive; indeed it becomes rabid."

Similar concerns have echoed throughout history about the radio, telephone, TV, and video games.

"It may sound comical in our modern context, but back then, when those new technologies were the latest distraction, they were probably really engaging. People spent too much time doing them," Feifer says. "And what can we say about that now, having seen it play out over and over and over again? We can say it's common. It's a common behavior. Doesn't mean it's the healthiest one. It's just not a medical problem."

Few today would argue that novels are in and of themselves addictive — regardless of how voraciously you may have consumed your last favorite novel. So, what happened? Were these things ever addictive — and if not, what was happening in these moments of concern?


There's a risk of pathologizing normal behavior, says Joel Billieux, professor of clinical psychology and psychological assessment at the University of Lausanne in Switzerland, and guest on the podcast. He's on a mission to understand how we can suss out what is truly addictive behavior versus what is normal behavior that we're calling addictive.

For Billieux and other professionals, this isn't just a rhetorical game. He uses the example of gaming addiction, which has come under increased scrutiny over the past half-decade. The language used around the subject of gaming addiction will determine how behaviors of potential patients are analyzed — and ultimately what treatment is recommended.

"For a lot of people you can realize that the gaming is actually a coping (mechanism for) social anxiety or trauma or depression," says Billieux.

"Those cases, of course, you will not necessarily target gaming per se. You will target what caused depression. And then as a result, If you succeed, gaming will diminish."

In some instances, a person might legitimately be addicted to gaming or technology, and require the corresponding treatment — but that treatment might be the wrong answer for another person.

"None of this is to discount that for some people, technology is a factor in a mental health problem," says Feifer.

"I am also not discounting that individual people can use technology such as smartphones or social media to a degree where it has a genuine negative impact on their lives. But the point here to understand is that people are complicated, our relationship with new technology is complicated, and addiction is complicated — and our efforts to simplify very complex things, and make generalizations across broad portions of the population, can lead to real harm."

Behavioral addiction is a notoriously complex thing for professionals to diagnose — even more so since the latest edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the book professionals use to classify mental disorders, introduced a new idea about addiction in 2013.

"The DSM-5 grouped substance addiction with gambling addiction — this is the first time that substance addiction was directly categorized with any kind of behavioral addiction," Feifer says.

"And then, the DSM-5 went a tiny bit further — and proposed that other potentially addictive behaviors require further study."

This might not sound like that big of a deal to laypeople, but its effect was massive in medicine.

"Researchers started launching studies — not to see if a behavior like social media use can be addictive, but rather, to start with the assumption that social media use is addictive, and then to see how many people have the addiction," says Feifer.

Learned helplessness

The assumption that a lot of us are addicted to technology may itself be harming us by undermining our autonomy and belief that we have agency to create change in our own lives. That's what Nir Eyal, author of the books Hooked and Indistractable, calls 'learned helplessness.'

"The price of living in a world with so many good things in it is that sometimes we have to learn these new skills, these new behaviors to moderate our use," Eyal says. "One surefire way to not do anything is to believe you are powerless. That's what learned helplessness is all about."

So if it's not an addiction that most of us are experiencing when we check our phones 90 times a day or are wondering about what our followers are saying on Twitter — then what is it?

"A choice, a willful choice, and perhaps some people would not agree or would criticize your choices. But I think we cannot consider that as something that is pathological in the clinical sense," says Billieux.

Of course, for some people technology can be addictive.

"If something is genuinely interfering with your social or occupational life, and you have no ability to control it, then please seek help," says Feifer.

But for the vast majority of people, thinking about our use of technology as a choice — albeit not always a healthy one — can be the first step to overcoming unwanted habits.

For more, be sure to check out the Build for Tomorrow episode here.

Why the U.S. and Belgium are culture buddies

The Inglehart-Welzel World Cultural map replaces geographic accuracy with closeness in terms of values.

Credit: World Values Survey, public domain.
Strange Maps
  • This map replaces geography with another type of closeness: cultural values.
  • Although the groups it depicts have familiar names, their shapes are not.
  • The map makes for strange bedfellows: Brazil next to South Africa and Belgium neighboring the U.S.

CT scans of shark intestines find Nikola Tesla’s one-way valve

Evolution proves to be just about as ingenious as Nikola Tesla

Credit: Gerald Schömbs / Unsplash
Surprising Science
  • For the first time, scientists developed 3D scans of shark intestines to learn how they digest what they eat.
  • The scans reveal an intestinal structure that looks awfully familiar — it looks like a Tesla valve.
  • The structure may allow sharks to better survive long breaks between feasts.

Mammals dream about the world they are entering even before birth

A study finds that baby mammals dream about the world they are about to experience to prepare their senses.

Michael C. Crair et al, Science, 2021.
Surprising Science
  • Researchers find that babies of mammals dream about the world they are entering.
  • The study focused on neonatal waves in mice before they first opened their eyes.
  • Scientists believe human babies also prime their visual motion detection before birth.