
Elon Musk and Mark Zuckerberg Spar Over How Dangerous AI Really Is

Some experts take issue with Elon Musk’s frightening warning about AI taking over.
(FROGDNA)

One way to develop a reputation as a visionary is to come up with a well-known, startlingly prescient prediction that proves true. Another way is to gain immense wealth and fame through the development of a breakthrough product—say, PayPal—or two—maybe Tesla—or three—SpaceX—and then use your well-funded megaphone to cast prognostications so far and wide and so often that the world comes to simply accept you as someone who sees the future. Even better if you can start a public debate with other famous visionaries, say Facebook’s Mark Zuckerberg, Bill Gates, and Stephen Hawking. This is what Elon Musk did at the U.S. National Governors Association meeting in July 2017.


Elon Musk (BRENDAN SMIALOWSKI)

Musk’s comments about artificial intelligence (AI) were startling and alarming, beginning with his assertion that “robots will do everything better than us.” “I have exposure to the most cutting-edge A.I.,” Musk said, “and I think people should be really concerned by it.”

His vision of the potential conflict is outright frightening: “I keep sounding the alarm bell but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.”

Musk’s pitch to the governors was partly about robots stealing jobs from humans, a concern we’ve covered on Big Think, and partly a Skynet scenario, with an emphasis on humanity’s weak odds of prevailing in the battle on the horizon. His point? “A.I. is a rare case where I think we need to be proactive in regulation [rather] than be reactive.”

It was this dire tone that caused Facebook’s Mark Zuckerberg to take issue with Musk’s position when asked about it in a Facebook Live chat. “I think people who are naysayers and try to drum up these doomsday scenarios—I don’t understand it,” said Zuckerberg. “It’s really negative, and in some ways I think it’s pretty irresponsible.”


Mark Zuckerberg (JUSTIN SULLIVAN)

As CEO of Facebook, Zuckerberg is as cranium-deep into AI as Musk, but has a totally different take on it. “I’m really optimistic. Technology can always be used for good and bad, and you need to be careful about how you build it, and what you build, and how it’s going to be used. But people are arguing for slowing down the process of building AI—I just find that really questionable. I have a hard time wrapping my head around that.”

Musk tweeted his response.

I’ve talked to Mark about this. His understanding of the subject is limited.

— Elon Musk (@elonmusk) July 25, 2017

Oh, snap.

He’s not the only one discussing this on Twitter. AI experts chimed in to denounce Musk’s fear-mongering as anything but a constructive contribution to a calm, reasoned discussion of AI’s promises and potential hazards.

Pedro Domingos, of the University of Washington, put it most succinctly.

One word: sigh. https://t.co/qeSSM6PQ5h @recode @elonmusk

— Pedro Domingos (@pmddomingos) July 17, 2017

And let’s not forget about the imperfect humans who create AI in the first place.

AI/ML makes a few existing threats worse. Unclear that it creates any new ones.

— François Chollet (@fchollet) July 16, 2017

It’s not as if Musk is the only one concerned about the long-term dangers of AI—it’s more about his extreme way of talking about it. As Maureen Dowd noted in her March 2017 Vanity Fair piece, “Some in Silicon Valley argue that Musk is interested less in saving the world than in buffing his brand, and that he is exploiting a deeply rooted conflict: the one between man and machine, and our fear that the creation will turn against us.”


Be that as it may, some are not as sanguine as Zuckerberg about what awaits us down the road with AI.

Stephen Hawking, for one, has warned us to tread carefully before we bestow intelligence on machines: “It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” He’s also warned, “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”

We already know that AI has an odd, non-human way of thinking that even its programmers have a hard time understanding. Will machines surprise us—even horrify us—with decisions no human would ever make?

Bill Gates has also expressed concerns: “I am in the camp that is concerned about super intelligence,” Gates wrote during a Reddit “Ask Me Anything” session. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”


Bill Gates (ALEX WONG)

As to how the governors group took Musk’s warning, there’s some evidence to suggest his sheer star power may have overwhelmed some politicians. Colorado Governor John Hickenlooper, for example, told NPR, “You could have heard a pin drop. A couple of times he paused and it was totally silent. I felt like—I think a lot of us felt like—we were in the presence of Alexander Graham Bell or Thomas Alva Edison ... because he looks at things in such a different perspective.”

