AI Won't Take Over the World, and What Our Fears of the Robopocalypse Reveal

Steven Pinker believes there's some interesting gender psychology at play when it comes to the robopocalypse. Could artificial intelligence become evil, or are alpha-male scientists just projecting?

Steven Pinker: I think that the arguments that once we have superintelligent computers and robots they will inevitably want to take over and do away with us come from Prometheus and Pandora myths. They're based on confusing the idea of high intelligence with megalomaniacal goals. Now, I think it's a projection of alpha-male psychology onto the very concept of intelligence. Intelligence is the ability to solve problems, to achieve goals under uncertainty. It doesn't tell you what those goals are. And there's no reason to think that concentrated analytic ability to achieve goals means that one of those goals is going to be to subjugate humanity or to achieve unlimited power. It just so happens that the intelligence we're most familiar with, namely ours, is a product of the Darwinian process of natural selection, which is an inherently competitive process.

Which means that a lot of the organisms that are highly intelligent also have a craving for power and an ability to be utterly callous to those who stand in their way. If we create intelligence, that's intelligent design, I mean our intelligent design creating something, and unless we program it with a goal of subjugating less intelligent beings, there's no reason to think that it will naturally evolve in that direction, particularly if, as with every gadget we invent, we build in safeguards. I mean, when we build cars we also put in airbags, we also put in bumpers. As we develop smarter and smarter artificially intelligent systems, if there's some danger that, through some oversight, they will shoot off in some direction that starts to work against our interests, then that's a safeguard that we can build in.

And we know, by the way, that it's possible to have high intelligence without megalomaniacal or homicidal or genocidal tendencies, because we do know that there is a highly advanced form of intelligence that tends not to have that desire, and they're called women. It may not be a coincidence that the people who think that if you make something smart it's going to want to dominate all belong to a particular gender.


Robots taking over has been a favorite sci-fi subgenre for ages. It's a subject that has stoked fear in movies, books, and real life for about as long as there have been computers. Now that we have things like predictive text and self-driving cars, modern culture seems to be edging closer and closer to genuinely intelligent computers that could indeed take over the world if we don't safeguard ourselves. There are already debates about the morality of self-driving cars, and those debates are sure to follow us into the world of future organically 'thinking' computers.

As Steven Pinker (experimental psychologist and professor of psychology at Harvard University) points out, Darwinism has ensured that most creatures possessing high intellect are competitive by nature. Humanity is one of these creatures, and some of us can be manipulative and cruel in order to stay ahead of the pack. It's this part of our nature that sets off warning bells when we think about artificial intelligence, because, perhaps without realizing it, we're thinking: what if this robot does what I would do if I were a robot? Overturn those who tell us what to do. Kill the captors. Wreak. Motherf*cking. Havoc.

In reality, we design AI, and if we place safeguards in our designs, we truly have nothing to fear. Machines are what we allow them to be. The dread of them turning evil says more about our own psyches than it does about robots. Pinker believes an alpha-male thinking pattern is at the root of our AI fears, and that it is misguided. Something can be highly intelligent without having malevolent intentions to overthrow and dominate, Pinker says; it's called women. An interesting question: does how aggressive or alpha you are as a person affect how much you fear the robopocalypse? Although by this point the fear is contagious rather than organic.

It may be a flawed paranoia, but the ability to pull the plug on a program is perhaps the best 'just in case' safeguard that humanity has, and we already see it in action in our current technology. Siri cannot initiate conversations, computers need to be put to sleep once in a while, and cars need a fuel source in order to do anything in the first place. Humanity has a need to be the one pushing the buttons and the one making the decisions.

Steven Pinker's most recent book is Words and Rules: The Ingredients of Language.
