What If Everything Went Straight to Hell?


A Q&A With Dr. John L. Casti, author of X-Events: The Collapse of Everything


Dr. John L. Casti is a complexity scientist. This is one of those job descriptions I would love to flaunt on business cards and over cocktails:

Hostess: And what does that involve, exactly?

Me: Yes, well. [Frowning into middle distance] It’s a bit complex.

In his study of complex systems, Casti has reached a grim conclusion: civilization as we know it, having been wired and webbed and linked down to the last aboriginal tribeschild, is now patiently waiting for the hand of fate to reach in, snip the wrong wire, and blow the whole apparatus into the stratosphere.

Since my panic dreams concluded the same thing years ago, I was delighted when Dr. Casti’s publicist contacted me recently about an author interview. Rarely does a pitch speak so intimately to my private anxieties. Will we all die from a worldwide computer virus, a worldwide actual virus, or both? These and other questions are addressed in Casti’s new book, X-Events: The Collapse of Everything (William Morrow/HarperCollins), but this week Casti was kind enough to answer a few questions directly.

Q: What are X-events, and how do you attempt to forecast them as a complexity scientist?

A: First of all, let me say that I do not believe that there is any person, method, or tool that can consistently and reliably predict specific human events, X- or otherwise. So my goal is not to predict the moment and/or location of any X-event. But what we actually see and call an “event” is a combination of two factors: chance and context. What I believe is that we can forecast the “changing landscape of context,” and thus get insight into when we are entering the danger zone of an X-event. The chance part, of course, is totally beyond our ability to forecast, since by its very nature it is essentially random, i.e., has no pattern. But context is a different story. It is the biasing factor that conditions the random event to give rise to one sort of outcome as opposed to another from the space of all as-yet-unrealized possibilities.

So how do we forecast context?

Each of my last two books, Mood Matters and X-Events, contains its own answer to this question. In MM, I focus on what I call the “social mood,” the beliefs (NOTE: not feelings, but beliefs) that a group, society, or population holds about its future. If the group is optimistic about its future, believing that tomorrow will be better than today, then that biases the events that actually occur to be ones to which we would generally attach labels like “happy,” “joining,” “global,” “welcoming,” and the like. If the group has a negative social mood, believing that tomorrow will be worse than today, the bias goes in the opposite direction. Instead of “welcoming” we have “rejecting”; instead of “global” we tend to see events that are “local,” and so forth.

To make use of this idea, we need a way of measuring the social mood. And, of course, this mood exists on many time scales, since you may feel optimistic about next week, but pessimistic about next year. So whatever “sociometer” you choose, it must be able to distinguish among these many time scales.

The sociometer I use in Mood Matters follows the lead of financial guru and social theorist Robert Prechter, who advocates using a financial market index as a vehicle for characterizing the social mood of a population. The reasons are explained in great detail in the book. I hasten to note that a market index like the S&P 500 is by no means the only tool one might employ. But it works reasonably well and is easy to obtain, as you’ll see illustrated by dozens of examples in the book.
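
To make the multi-time-scale idea concrete, here is a minimal sketch (my illustration, not a method from either book) of reading a market index as a crude sociometer, assuming all you have is a list of daily closing prices. The window lengths and the mood labels are assumptions chosen for illustration:

```python
# Toy "sociometer": read the trend of a market index (e.g., the S&P 500)
# on several time scales as a crude proxy for social mood. The window
# lengths (in trading days) and the mood labels are illustrative guesses.

WINDOWS = {"next week": 5, "next quarter": 63, "next year": 250}

def sociometer(closes, windows=WINDOWS):
    """closes: daily closing prices, oldest first."""
    readings = {}
    for horizon, days in windows.items():
        if len(closes) <= days:
            continue  # not enough history to read this time scale
        change = (closes[-1] - closes[-1 - days]) / closes[-1 - days]
        readings[horizon] = "optimistic" if change > 0 else "pessimistic"
    return readings

# Made-up prices: a year-long rise followed by a losing week, so the
# short-horizon mood turns negative while the long-horizon mood stays positive.
prices = [100 + 0.1 * t for t in range(250)] + [124, 123, 122, 121, 120]
print(sociometer(prices))
```

The point of the toy is only that a single price series can carry different mood readings at different horizons, which is exactly why the sociometer must distinguish among time scales.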

In my most recent book, X-Events, I argue that human-caused extreme events ranging from political revolutions to financial market meltdowns to a crash of the Internet all stem from the very same source: a complexity overload/mismatch in the system. In short, there’s too much complexity chasing too little understanding, along with too large a gap between the complexity of the systems intended to regulate a target system and the complexity of that system itself. Let me give an example to hammer home the point.

To oversimplify a bit, the global financial system consists of firms in the financial services sector—banks, hedge funds, insurance companies, and the like—and various governmental agencies charged with regulating these firms. From the 1990s onward, the financial sector created a vast array of instruments designed to separate investors from their money: financial derivatives of an ever-increasing level of complexity. Eventually, this complexity reached the point where even the creators of the derivatives themselves didn’t understand them. At the same time, the complexity of the regulating bodies was pretty much frozen in place. So the gap between the heightened complexity of the financial sector and the static complexity of the regulators grew to an unsustainable level, and a crash was necessary to narrow it.

A good analogy here is stretching a rubber band. You can stretch and stretch, and even feel the tension increase in the muscles of your hands and arms, as the gap from one end of the band to the other widens. But at some point you reach the limits of the band’s elasticity and it snaps. The same thing happens with human systems. They reach their level of complexity tolerance and then they snap (read: crash). And there are only two ways to avoid this crash. The higher-complexity system must voluntarily downsize, which virtually never happens, since humans have a congenital fear of losing what they’ve attained if they downsize. Or the lower-complexity system must “upsize,” another phenomenon that almost never happens, mostly because the high-complexity side almost always sees such an upsizing as its loss in a zero-sum game.

The end result here is that by measuring this complexity gap you can get a good sense of when a crash is imminent. Exactly how to measure this gap is an active research topic at The X-Center, a new research institute I established in Vienna earlier this year.
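
Casti leaves the metric itself open, but a purely hypothetical toy can show the shape of the idea: proxy each side’s complexity by, say, the Shannon entropy of its observed behavior, and track the difference. Everything below, from the entropy proxy to the made-up behavior logs, is my assumption, not The X-Center’s research:

```python
import math
from collections import Counter

# Hypothetical "complexity gap" toy: proxy each system's complexity by the
# Shannon entropy of its observed behavior (more distinct actions, more
# evenly used = more complex), then take the difference. How to measure
# the gap for real systems is, per Casti, an open research question.

def entropy(behaviors):
    """Shannon entropy (in bits) of a sequence of observed actions."""
    counts = Counter(behaviors)
    n = len(behaviors)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def complexity_gap(regulated, regulator):
    """Positive and growing = rising tension in the rubber band."""
    return entropy(regulated) - entropy(regulator)

# Made-up behavior logs: a financial sector inventing ever more instruments
# versus a regulator cycling through the same two routines.
sector = ["swap", "CDO", "CDS", "option", "future", "swap", "synthetic CDO", "MBS"]
watchdog = ["audit", "audit", "report", "audit", "report", "audit", "audit", "audit"]
print(f"complexity gap: {complexity_gap(sector, watchdog):.2f} bits")
```

Whatever measure ultimately proves useful, the logic of the rubber band is the same: it is the trend of the gap, not any single reading, that signals the approach of the snapping point.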

Q: The fear that global interdependence spells catastrophe is an old one—Robinson Jeffers wrote 75 years ago that “there is no escape” from the “mass disasters” it will bring. Why do you believe the danger of such X-events is greater than ever?

A: In the opening section of X-Events I liken modern society to a house of cards, where the layers of cards correspond to higher and higher levels of social and technological infrastructure needed to sustain our current post-industrial way of life. My view is that the number of layers has grown to the point that almost all the resources of our economies are being consumed in simply maintaining the current structure. So when the next big problem comes online, be it the Euro crisis, nuclear proliferation, an overstretched Internet, a killer flu, or any of the other possibilities I consider in X-Events, we will suffer a complexity overload. At that point, the whole intimately intertwined structure comes tumbling down just like a house of cards.

Why now, you ask? I think the answer is clear. The process of globalization has now interconnected almost everything, ranging from financial markets to transport networks to communication systems, into a huge system that no one really understands. System theorists know that it’s easy to couple simple-to-understand systems into a “super system” capable of displaying behavioral modes that cannot be seen in any of its constituent parts. This is the process called “emergence.” And contrary to the apparent beliefs of evangelists of globalization like Thomas Friedman, there is no guarantee that bigger will always be better. There is also no guarantee that the emergent properties of a highly interconnected system will not cause the entire system to self-destruct. This is why I’m concerned right now about the rush to globalize. We don’t want to do with the global systems we depend upon for everyday life what the bankers did with finance: create systems they didn’t understand, and then watch the whole structure crash back toward a pre-industrial level.
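
A tiny numerical illustration (mine, not Casti’s, with arbitrary coefficients) of that warning: two subsystems that are each perfectly stable in isolation can blow up once they are strongly coupled, a behavioral mode visible only in the super system.

```python
# Emergence-by-coupling toy: each subsystem damps itself and is stable
# alone, but strong interconnection produces runaway growth that neither
# part exhibits in isolation. The coefficients are arbitrary.

def step(x, y, coupling):
    # Self-damping (factor 0.5) plus drive from the other subsystem.
    return 0.5 * x + coupling * y, 0.5 * y + coupling * x

for coupling in (0.0, 1.2):  # isolated vs. strongly interconnected
    x, y = 1.0, 1.0
    for _ in range(20):
        x, y = step(x, y, coupling)
    print(f"coupling={coupling}: state after 20 steps = {x:.3g}")
```

With the coupling switched off, both states decay toward zero; switched on, they explode. The failure mode lives in the interconnection itself, not in either part, which is the sense in which a globalized super system can harbor behaviors no constituent reveals.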

Q: Which of the various doomsday scenarios you outline in your book do you consider most plausible?

A: To begin with, let me say that I’m not sure “plausible” is really the right word here. All eleven candidate X-events presented in Part II of X-Events are certainly plausible; in fact, the story I tell in each of those chapters is aimed at saying how the event might happen, what its impact is likely to be on our way of life if it does occur, and what steps we might take today to ensure that we emerge as survivors, if not beneficiaries, of the event, at least in the longer-term perspective. So I regard each of the eleven X-events as “plausible.” But that does not mean I regard each of them as equally likely. In fact, the very nature of an X-event is that it is both rare and surprising. So I would not say that any specific X-event is likely. What I would say, though, is that some X-event is not only plausible, but very likely on a time scale of a few years.

When it comes to likelihood, we must bear in mind the timeframe. Is the event likely to take place tomorrow? Next month? Next decade? Or…? Each of the eleven scenarios in my book (and I have another dozen or more still sitting in my computer) revolves around an X-event that has a natural unfolding time. That time is very short for an electromagnetic pulse or a terrorist-driven nuclear attack, perhaps just a few minutes or even seconds. On the other hand, the unfolding time for the end of globalization or a worldwide deflation is much longer, certainly measured in years, if not decades.

So which of the eleven X-events do I regard as most likely to take place? Bearing the foregoing caveats in mind, I’d say the most likely is a global deflation. I regard this X-event as almost certain to unfold within the next decade, if not within two or three years. The world is awash in more debt than there is money in the world to liquidate. Trying to solve the problem by creating more debt is analogous to trying to stop being an alcoholic by going on a bender down at the corner bar. It simply ain’t going to happen that way. At some point, the world is going to have to bite the bullet and accept a huge downsizing in its way of life to bring the assets-to-debt ratio back in touch with reality.

If you ask which of the scenarios I think is most dangerous, though, I will give a different answer. In that form of the question, I regard a nuclear attack, terrorist-generated or otherwise, as the most threatening combination of likelihood and long-term damage to modern life today.

Q: You go on record in the book as believing the Singularity (superhuman or transhuman intelligence) will occur. Granted that this would be a disruptive event, do you believe it would ultimately be catastrophic or beneficial?

A: This is a fascinating question. I think that in the immediate aftermath of a superhuman machine intelligence revealing itself, most people would feel very threatened but take solace in the thought that we can always pull the plug. Of course, no such intelligence is going to come out of the box, so to speak, without having first realized that we would feel this way and taken steps to block any such ham-handed effort to shut it down. So the real question is how we will feel, once we realize that the new kid in town is here to stay.

Once the reality sets in of a superhuman intelligence being in control of every aspect of the infrastructures we rely upon for everyday life, we will simply have to try to come to an accommodation with that entity. My own guess is that quite quickly the machine intelligence will start dreaming machine dreams and thinking machine thoughts, both of which would be totally incomprehensible to us. This would then lead to each species, we and the machines, moving off onto its own separate life trajectory. Essentially, we would be sharing the same physical environment but following mutually incomprehensible life activities. This situation would be much like what already exists today between us humans and, say, a colony of termites or ants. The two of us pretty comfortably coexist as long as we don’t get in each other’s way, although I think it’s safe to assume that neither species really has much idea or concern about what the other is doing.

If things follow this scenario, I don’t think the emergence of a superhuman intelligence would be at all catastrophic but much more likely to be beneficial—just as long as we don’t start trying to interfere with it! If that were to happen, though, then life for us humans could get very unpleasant, very quickly. For a great read that provides one account as to what might happen, let me close by recommending the novelette “Golem XIV” by Stanislaw Lem, which appears in his book Imaginary Magnitude (Harcourt, San Diego, 1984).

[Image via HarperCollins.]
