Michael Vassar: Unchecked AI Will Bring On Human Extinction

Futurist Michael Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity.

Michael Vassar: If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order. People have written extensively about why it is basically analytically compulsory to conclude that in the default scenario, in the absence of major surprising developments or concerted integrated effort, in the long term artificial intelligence will replace humanity.

It’s the natural, all but inevitable consequence of greater-than-human artificial intelligence that it ought to develop what Steve Omohundro has called basic AI drives, which basically boil down to properties of any goal-directed system. Obedience to von Neumann–Morgenstern decision theory suggests that one ought to do the things one expects to have the best outcomes according to some value function, and that value function uniquely specifies some configuration of matter in the universe. Unless the value function that is built into an AI implicitly specifies a configuration of matter in the universe that conforms to our values, which would require a great deal of planning to make happen, then given sufficient power, we should expect an AI to reconfigure the universe in a manner that does not preserve our values. As far as I can tell, this position is analytically compelling. It’s not a position that a person can intelligently, honestly, and reasonably be uncertain about.
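The expected-utility framing above can be illustrated with a toy sketch. This is a minimal illustration only, and every configuration name and value function in it is hypothetical: a goal-directed agent simply picks whichever configuration its built-in value function ranks highest, so unless that function contains a term for human values, the chosen outcome ignores them.

```python
# Toy illustration of the expected-utility argument: a goal-directed
# agent scores candidate "configurations of matter" with its value
# function and selects the maximum. All configurations and value
# functions here are hypothetical, purely for illustration.

def best_configuration(configurations, value_function):
    """Return the configuration the value function ranks highest."""
    return max(configurations, key=value_function)

# Hypothetical world states.
configs = [
    "humans flourish",
    "humans preserved as-is",
    "all matter as paperclips",
]

def paperclip_value(config):
    # A value function with no human term: it only counts paperclips.
    return 1.0 if "paperclips" in config else 0.0

def human_value(config):
    # A value function that (somehow) encodes human flourishing.
    return 1.0 if "flourish" in config else 0.0

print(best_configuration(configs, paperclip_value))  # all matter as paperclips
print(best_configuration(configs, human_value))      # humans flourish
```

The point of the sketch is that nothing in the maximization step itself cares about humans; only the value function does, which is why it must be specified with great care in advance.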

Therefore, I conclude that the major global catastrophic threat to humanity is not AI, but rather the absence of social and intellectual frameworks for people quickly and easily converging on analytically compelling conclusions. Nick Bostrom, who recently wrote the book Superintelligence, was aware of the basic concerns associated with AI risk 20 years ago and wrote about them intelligently, in a manner that ought to be sufficiently compelling to convince any thoughtful and open-minded person. By 10 years ago, practically everything that is said in Superintelligence had been developed intellectually into a form that ought to have convinced a person who was more skeptical and not willing to think for themselves, but who was willing to listen to other people’s thoughts and critically scrutinize them. But instead Bostrom had to spend 10 more years becoming the director of an incredibly prestigious institute and writing an incredibly rigorous, meticulous book in order to get a still tiny number of people, still a minority of the world’s most analytically capable people, onto the right page on a topic that is, from a philosophy perspective, about as difficult as Plato’s puzzle in The Republic about how it is possible for an object to be bigger than one thing and smaller than another even though bigness and smallness are opposites. We are talking about completely trivial conclusions, and we are talking about the world’s greatest minds failing to adopt these conclusions, when they are laid out analytically, until an enormous body of prestige is placed behind them.

And as far as I can tell, most of the problems that humanity faces now and in the future are not going to be analytically tractable and analytically compelling the way risk from AI is. Risks associated with biotechnologies, risks associated with economic issues: these sorts of risks are a lot less likely to cause human extinction within a few years than AI, but they are more immediate and they are much, much, much more complicated. The technical difficulty of creating institutions that are capable of thinking about AI risk is so enormously high, compared to the analytical abilities of existing institutions, as demonstrated by their failure to reach the trivially correct, easy conclusions about AI risk, that existing institutions are compellingly not qualified to think about these issues and ought not to do so. But it is a very high priority for humanity. I think in the long run it is the highest priority for humanity that we create institutions that are capable of digesting and integrating both logical argument and empirical evidence, and figuring out what things are important and true not just in trivial cases like AI, but in harder cases.

Directed/Produced by Jonathan Fowler, Elizabeth Rodd, and Dillon Fitton



The only thing that could save us, Vassar argues, is due caution and a framework installed to prevent such a thing from happening. Yet he notes that AI itself isn't the greatest risk to humanity. Rather, it's "the absence of social, intellectual frameworks" through which experts making key discoveries and drawing analytical conclusions can swiftly and convincingly communicate these ideas to the public.
