Dr. Michio Kaku is the co-founder of string field theory, and is one of the most widely recognized scientists in the world today. He has written 4 New York Times[…]

In mid-2017, Elon Musk spoke these words at a National Governors Association meeting and sparked what is now a famous A.I. debate between himself and Facebook CEO Mark Zuckerberg: “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react.” Musk wants governors to legislate A.I. now, believing it to be an existential threat to humanity’s future and likening it to “summoning the demon.” Zuckerberg, on the other hand, called Musk’s predictions “pretty irresponsible” and made the case for A.I. as a tool to vastly improve people’s quality of life, adding that tech companies should not slow down. To which Musk tweeted: “I’ve talked to Mark about this. His understanding of the subject is limited.”

That’s the battle of the billionaires in a nutshell, a battle that has divided experts and pundits alike in an epic debate about the future of A.I. So where does theoretical physicist Michio Kaku stand? Kaku thinks both are right: Zuckerberg in the short term, and Musk in the long run. For Kaku, the tipping point from Team Zuck to Team Musk is the moment A.I. achieves self-awareness, which he suspects could come at the end of this century. And what should we do then? “When robots become as intelligent as monkeys I think we should put a chip in their brain to shut them off if they begin to have murderous thoughts,” says Kaku.

What do you think of Kaku’s take on the Musk vs. Zuckerberg A.I. debate and his solution? Michio Kaku’s latest book is The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond Earth.

Michio Kaku: Recently, I was on the Richard Quest show on CNN and I was asked about the battle of the billionaires: on one hand we have Mark Zuckerberg saying, “Don’t worry, artificial intelligence will give us new jobs, new industries, create wealth, prosperity.” And then we have people like, well, Elon Musk, who says, “Watch out. They pose an existential threat to humanity.” Who knows, maybe one day they’ll put us in zoos and throw peanuts at us and make us dance, make us dance behind bars like we do with monkeys and with bears.

Well, my personal point of view is that both points of view are in some sense correct. In the short term, I think Zuckerberg is right. Artificial intelligence will open up whole new vistas, it will make life more convenient, things will be cheaper, new industries will be created. I personally think the A.I. industry will be bigger than the automobile industry.

In fact, I think the automobile is going to become a robot. You’ll talk to your car. You’ll argue with your car. Your car will give you the best facts, the best route between point A and point B; the car will be part of the robotics industry. Whole new industries involving the repair, maintenance, servicing of robots, not to mention robots that are software programs that you talk to and make life more convenient.

However, let’s not be naïve. There is a point, a tipping point at which they can become dangerous and pose an existential threat. And that tipping point is self-awareness.

You see, robots are not aware of the fact that they’re robots. They’re so stupid they simply carry out what they are instructed to do because they’re adding machines. We forget that. Adding machines don’t have a will. Adding machines simply do what you program them to do.

Now, of course, let’s not be naïve about this: eventually, adding machines may be able to compute alternate goals and alternate scenarios when they realize that they are not human. Right now, robots do not know that.

However, there is a tipping point at which point they could become dangerous. Right now, our most advanced robot has the intelligence of a cockroach—a rather stupid cockroach.

However, it’s only a matter of time before robots become as smart as a mouse, then as smart as a rat, then a rabbit, then a cat, a dog, and eventually as smart as a monkey. Now, monkeys know they are not human. They have a certain amount of self-awareness. Dogs, especially young dogs, are not quite sure. One reason why dogs obey their masters is because they think the master is the top dog, and so they’re a little bit confused about whether or not we humans are part of the dog tribe. But monkeys, I think, have no problems with that; they know they’re not human.

So when robots become as intelligent as monkeys I think we should put a chip in their brain to shut them off if they begin to have murderous thoughts. When will that happen? I don’t know.

I suspect it will happen late in this century, because I think we have decades of experience to go through and learn from before we confront this particular problem.

So, in other words, I don’t think there’s any rush today to deal with killer robots that are going to destroy the human race and take over, but I think we have to keep one eye on the ball and realize that by the end of this century, when robots do become self-aware, we have to be careful.