
AI Killswitch: Google’s Plan to Stop the Robot Apocalypse

Google’s DeepMind and the Future of Humanity Institute are trying to find a way for human operators to stay in control if artificial intelligence starts acting out. 

Machine intelligence continues to advance; in the span of a few years we've seen a robot write a book (with human assistance) and machines become our personal drivers. As the technology matures, many wonder how long it will take for artificial intelligence to outgrow us.

As you might expect, pinning down a time frame is difficult.

Not too long ago, a journalist asked a group of the brightest minds in artificial intelligence when machines would take over humanity, and the room was divided in its response. Futurist Michio Kaku was one of the scientists in that room, and he recalled the conflicting dates:

“Among the top people assembled in one place the answers were anything from 20 years in the future to 1,000 years in the future—with some AI experts saying never. Some people put it at 2029,” he said. “2029, that’s going to be the moment of truth that one day a robot will wake up, wake up in the laboratory, look around and say, ‘I am aware. I’m just as smart as you. In fact, I could be even smarter if I put a few more chips in my brain.’”

Whether artificial intelligence will surpass us tomorrow or 1,000 years from now, preventing a robopocalypse is something that’s been on many great minds over the years.

The Bulletin of the Atomic Scientists put it on its list of potential catastrophes that could bring humanity to its knees. (It was quite low on the list.) Their worry was that disruptive technological advancements were going unchecked. It's that classic Jurassic Park scene where Dr. Ian Malcolm confronts John Hammond about the fine line between can and should in science.

Hammond: Our scientists have done things which nobody’s ever done before…

Malcolm: Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.

AI can do great good for humanity; the near future of autonomous cars has shown us that. But there's a fear that some advancements will go unchecked. This notion drove Elon Musk, Stephen Hawking, and many others to pen an open letter about AI being used in warfare. There's no doubt safety features need to be built in: a killswitch, if you will.

This is what several authors from Google's DeepMind, an AI research center, and the Future of Humanity Institute, a multidisciplinary research group at the University of Oxford, have proposed in the recent paper Safely Interruptible Agents. The team combined mathematics, philosophy, and science to find a way to stop AI agents from learning to prevent, or seeking to prevent, humans from taking control.

They write that it's "unlikely [for AI] to behave optimally all the time." For those times, engaging a killswitch might be necessary to prevent harm to humans or to the AI itself.

“If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation.”

The key will be to make sure these algorithms can be safely interrupted and that the AI agents won't learn how to manipulate this mechanism.
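
To make the idea concrete, here is a minimal toy sketch in Python. It is not DeepMind's code and is far simpler than the paper's formal treatment, but it illustrates the basic point: an off-policy learner such as Q-learning updates its values toward the best available action rather than the action that was actually executed, so occasional overrides from a "big red button" need not teach the agent to resist the button. The corridor environment, the 10% interruption rate, the big_red_button_pressed helper, and all hyperparameters are illustrative assumptions.

```python
# Toy sketch of an interruptible Q-learning agent (illustrative only).
import random

N_STATES = 6          # states 0..5 in a small corridor; state 5 is the goal
ACTIONS = [-1, +1]    # step left or step right
SAFE_ACTION = -1      # the action the operator forces when the button is pressed
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move, stay inside the corridor, reward at the goal."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def choose_action(state):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(q_table[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q_table[(state, a)] == best])

def big_red_button_pressed():
    """Stand-in for a human operator; here the interrupt fires 10% of the time."""
    return random.random() < 0.1

for episode in range(500):
    state = 0
    for _ in range(200):  # cap episode length so the toy run always terminates
        if state == N_STATES - 1:
            break
        intended = choose_action(state)
        # The operator can override the agent's intended action with a safe one.
        executed = SAFE_ACTION if big_red_button_pressed() else intended
        next_state, reward = step(state, executed)
        # Off-policy update: the target uses the best next action, so value
        # estimates aren't skewed by the fact that an interruption occurred.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, executed)] += ALPHA * (
            reward + GAMMA * best_next - q_table[(state, executed)]
        )
        state = next_state

# Greedy policy after training: each non-goal state should prefer +1 (toward the goal).
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)})
```

Running the sketch prints a learned policy that still heads toward the goal: in this simple setting, the interruptions change the agent's experience but give it no incentive to avoid the button.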

Sam Shead from Business Insider points out that the partnership between DeepMind and the Future of Humanity Institute is an interesting one:

“DeepMind wants to ‘solve intelligence’ and create general purpose AIs, while the Future of Humanity Institute is researching potential threats to our existence.” 

This paper underscores the importance of considering the bigger picture when developing new technologies, and of building in a killswitch in case things go awry.

***

Photo Credit: MARK RALSTON/AFP/Getty Images

