Michael Vassar: Unchecked AI Will Bring On Human Extinction

Futurist Michael Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity. The only thing that could save us is if due caution were observed and a framework put in place to prevent such a thing from happening. Yet Vassar notes that AI itself isn’t the greatest risk to humanity. Rather, it’s "the absence of social, intellectual frameworks" through which experts making key discoveries and drawing analytical conclusions can swiftly and convincingly communicate those ideas to the public.

Michael Vassar: If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order. People have written extensively about why it is basically analytically compulsory to conclude that in the default scenario, in the absence of major surprising developments or a concerted, integrated effort, artificial intelligence will in the long term replace humanity.

It’s the natural, all but inevitable consequence of greater-than-human artificial intelligence that it will develop what Steve Omohundro has called basic AI drives, and basic AI drives boil down to properties of any goal-directed system. Obedience to von Neumann-Morgenstern decision theory implies that an agent ought to do the things it expects to have the best outcomes according to some value function, and that value function picks out some configuration of matter in the universe. Unless the value function built into an AI implicitly specifies a configuration of matter in the universe that conforms to our values, which would require a great deal of deliberate planning, then given sufficient power we should expect the AI to reconfigure the universe in a manner that does not preserve our values. As far as I can tell, this position is analytically compelling. It’s not a position that a person can intelligently, honestly, and reasonably be uncertain about.
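To spell out the decision-theoretic step Vassar is leaning on: the standard von Neumann-Morgenstern picture is that a coherent agent acts as if it maximizes expected utility. A minimal statement of that principle, in textbook notation rather than anything from the transcript, is:

\[
a^{*} \;=\; \arg\max_{a \in A} \; \sum_{o \in O} P(o \mid a)\, U(o)
\]

Here \(A\) is the set of available actions, \(O\) the set of possible outcomes, \(P(o \mid a)\) the agent’s probability of outcome \(o\) given action \(a\), and \(U\) its value function over outcomes. The argument above is that a sufficiently powerful maximizer of \(U\) will steer the world toward whatever configurations of matter score highest under \(U\), so unless \(U\) happens to encode human values, those configurations need not preserve us.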

Therefore, I conclude that the major global catastrophic threat to humanity is not AI, but rather the absence of social and intellectual frameworks for people quickly and easily converging on analytically compelling conclusions. Nick Bostrom, who recently wrote the book Superintelligence, was aware of the basic concerns associated with AI risk 20 years ago and wrote about them intelligently, in a manner that ought to be sufficiently compelling to convince any thoughtful and open-minded person. By 10 years ago, practically everything that is said in Superintelligence had been developed intellectually into a form that ought to have convinced even a person who was more skeptical and not willing to think for themselves, but who was willing to listen to other people’s thoughts and merely scrutinize them critically. Instead, Bostrom had to spend 10 more years becoming the director of an incredibly prestigious institute and writing an incredibly rigorous, meticulous book in order to get a still tiny number of people, still a minority of the world’s most analytically capable people, onto the right page. And this on a topic that is, from a philosophy perspective, about as difficult as Plato’s puzzle in The Republic about how it’s possible for an object to be bigger than one thing and smaller than another even though bigness and smallness are opposites. We are talking about completely trivial conclusions, and we are talking about the world’s greatest minds failing to adopt these conclusions, when they are laid out analytically, until an enormous body of prestige is placed behind them.

And as far as I can tell, most of the problems that humanity faces now and in the future are not going to be analytically tractable and analytically compelling the way risk from AI is. Risks associated with biotechnologies and risks associated with economic issues are a lot less likely to cause human extinction within a few years than AI, but they are more immediate and they are much, much, much more complicated. The technical difficulty of creating institutions capable of thinking about AI risk is so enormously high, compared to the analytical abilities of existing institutions, as demonstrated by their failure to reach the trivially correct, easy conclusions about AI risk, that existing institutions are compellingly not qualified to think about these issues and ought not to do so. But building such institutions is a very high priority for humanity. I think in the long run it is the highest priority for humanity that we create institutions capable of digesting and integrating both logical argument and empirical evidence, and of figuring out what things are important and true not just in trivial cases like AI, but in harder cases.

Directed/Produced by Jonathan Fowler, Elizabeth Rodd, and Dillon Fitton