13.8

The future of humanity: can we avert disaster?

Climate change and artificial intelligence pose substantial — and possibly existential — problems for humanity to solve. Can we?

Key Takeaways
  • Just by living our day-to-day lives, we are walking into a disaster.
  • Can humanity wake up to avert disaster?
  • Perhaps COVID was the wake-up call we all needed.

Does humanity have a chance for a better future, or are we just unable to stop ourselves from driving off a cliff? This was the question that came to me as I participated in a conference entitled The Future of Humanity, hosted by Marcelo’s Institute for Cross-Disciplinary Engagement. The conference featured an array of remarkable speakers, some of whom were hopeful about our chances and some less so. But when it came to the dangers facing our project of civilization, two themes appeared in almost everyone’s talks.

And here’s the key aspect that unifies those dangers: we are doing it to ourselves.

The problem of climate change

The first existential crisis that was discussed was, as you might guess, climate change. Bill McKibben, the journalist and now committed activist who first began documenting the climate crisis as far back as the 1980s, gave us a history of humanity’s inability to marshal action even in the face of mounting scientific evidence. He spoke of the massive, well-funded disinformation efforts paid for by the fossil fuel industry to keep that action from being taken because it would hurt their bottom lines.

Next, Elizabeth Kolbert, one of America’s finest non-fiction writers, gave a sobering portrait of efforts to deal with climate change through technological fixes. Drawing on her wonderful new book, she looked at the problem of control when it comes to people and the environment. She spoke of how often we get into trouble when we try to exert control over things like rivers or animal populations, only to find that these efforts go awry due to unintended consequences. That, in turn, requires new layers of control, which follow the same path.

At the end of the talk, she focused on attempts to deal with climate change through new kinds of environmental controls, the subtext being that we are likely to run into the same cycle of unintended consequences and attempts to repair the damage. In the question-and-answer period following her talk, Kolbert was decidedly not positive about the future. Because she had looked so deeply into the possibilities of using technology to get us out of the climate crisis, she was dubious that a tech fix was going to save us. The only real action that will matter, she said, is masses of people in the developed world reducing their consumption. She didn’t see that happening anytime soon.

The problem of artificial intelligence

Another concern was artificial intelligence. Here the worry was not so much existential. By this, I mean the speakers were not fearful that some computer was going to wake up into consciousness and decide that the human race needed to be enslaved. Instead, the danger was more subtle but no less potent.

Susan Halpern, also one of our greatest non-fiction writers, gave an insightful talk focused on the artificial aspect of artificial intelligence. Walking us through numerous examples of how “brittle” the machine learning algorithms at the heart of modern AI systems are, Halpern pinpointed how these systems are not intelligent at all but carry all the biases, often unconscious ones, of their makers. For example, facial recognition algorithms can have a hard time differentiating the faces of women of color, most likely because the training data sets the algorithms learned from were not representative of these human beings. But because these machines supposedly rely on data, and “data don’t lie,” they get deployed into everything from decisions about justice to decisions about who gets insurance. And these are decisions that can have profound effects on people’s lives.

Then there was the general trend of AI being deployed in the service of both surveillance capitalism and the surveillance state. In the former, your behavior is always being watched and used against you in terms of swaying your purchasing decisions; in the latter, you are always being watched by those in power. Yikes!

The banality of danger

Listening to these talks, I was struck by how mundane the sources of these dangers are in day-to-day life. Unlike nuclear war or a lone terrorist building a super-virus (threats that Sir Martin Rees spoke of eloquently), the climate crisis and the emerging surveillance culture are things we are collectively doing to ourselves through our own innocent individual actions. It’s not like some alien threat has arrived and will use a mega-laser to drive the Earth’s climate into a new and dangerous state. Nope, it’s just us — flying around, using plastic bottles, and keeping our houses toasty in the winter. And it’s not like soldiers in black body armor arrive at our doors and force us to install listening devices that track our activities. Nope, we willingly set them up on the kitchen counter because they are so dang convenient. These threats to our existence and to our freedoms are things we are doing just by living our lives in the cultural systems we were born into. And it would take considerable effort to untangle ourselves from those systems.

So, what’s next? Are we simply doomed because we can’t collectively figure out how to build and live with something different? I don’t know. It’s possible that we are doomed. But I did find hope in the talk given by the great (and my favorite) science fiction writer Kim Stanley Robinson. He pointed to how every era has its own “structure of feeling,” the cognitive and emotional background of an age. Robinson looked at some positive changes that emerged in the wake of the COVID pandemic, including a renewed sense that we’re all in this together. Perhaps, he said, the structure of feeling of our own age is about to change.

Let us hope, and where we can, let us act.
