
Better-Than-Human AI Would Undoubtedly Eradicate Us, with Michael Vassar

The futurist and entrepreneur takes an analytic approach to assessing the existential risks inherent in pursuing artificial intelligence.

Futurist and entrepreneur Michael Vassar doesn’t mince words when discussing the potential for greater-than-human AI to destroy humanity. And why would he? As he explains in today’s featured Big Think interview, it’s an obvious conclusion when thought about analytically:


“If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order. People have written extensively about why it is basically analytically compulsory to conclude that in the default scenario, in the absence of major surprising developments or concerted integrated effort in the long term, artificial intelligence will replace humanity.”

Vassar draws from much of the same source material as Elon Musk and Stephen Hawking, who made headlines earlier this year when they co-signed an open letter warning of the existential risk of developing advanced AI. The most notable source is the work of Nick Bostrom of Oxford University, in particular his book Superintelligence. Vassar admires Bostrom’s work, but notes that many of the ideas espoused in Superintelligence are things Bostrom had written intelligently about as long as 20 years ago. It took two decades for Bostrom to rise through the ranks of his profession and become director of Oxford’s Future of Humanity Institute before the scholarly world took him seriously. His ideas were no less valid back then. All he lacked, Vassar says incredulously, was prestige.

Thus, the greatest threat to humanity, according to Vassar, isn’t extermination by AI per se, but rather the absence of any structure that swiftly elevates analytically sound ideas:

“As far as I can tell, most of the problems that humanity faces now and in the future are not going to be analytically tractable and analytically compelling the way risk from AI is analytically tractable and analytically compelling. Risks associated with biotechnologies, risks associated with economic issues — these sorts of risks are a lot less likely to cause human extinction within a few years than AI. But they are more immediate and they are much, much, much more complicated.”

An example of what he’s talking about: an astronomer who lacks “prestige” spots an asteroid set to hit Earth in 12 years but isn’t taken seriously until 2025 (and thus, perhaps too late), after she becomes director of something or other. Better-than-human AI is merely one avenue toward eventual human extinction, explains Vassar. And if it’s the avenue that comes to fruition, it won’t be the AI that’s to blame.
