
AI Won't Take Over the World, and What Our Fears of the Robopocalypse Reveal

Steven Pinker believes there's some interesting gender psychology at play when it comes to the robopocalypse. Could artificial intelligence become evil, or are alpha-male scientists just projecting?

Steven Pinker: I think that the argument that once we have superintelligent computers and robots they will inevitably want to take over and do away with us comes from Prometheus and Pandora myths. It's based on confusing the idea of high intelligence with megalomaniacal goals. Now, I think it's a projection of alpha-male psychology onto the very concept of intelligence. Intelligence is the ability to solve problems, to achieve goals under uncertainty. It doesn't tell you what those goals are. And there's no reason to think that the concentrated analytic ability to achieve goals means that one of those goals will be to subjugate humanity or to achieve unlimited power. It just so happens that the intelligence we're most familiar with, namely ours, is a product of the Darwinian process of natural selection, which is an inherently competitive process.

Which means that a lot of the organisms that are highly intelligent also have a craving for power and an ability to be utterly callous to those who stand in their way. If we create intelligence, that's intelligent design. I mean, it's our intelligence designing something, and unless we program it with the goal of subjugating less intelligent beings, there's no reason to think it will naturally evolve in that direction, particularly if, as with every gadget we invent, we build in safeguards. I mean, when we build cars, we also put in airbags; we also put in bumpers. As we develop smarter and smarter artificially intelligent systems, if there's some danger that one will, through some oversight, shoot off in a direction that starts to work against our interests, then that's a safeguard we can build in.

And we know, by the way, that it's possible to have high intelligence without megalomaniacal or homicidal or genocidal tendencies, because we do know that there is a highly advanced form of intelligence that tends not to have that desire, and they're called women. It may not be a coincidence that the people who think, "Well, if you make something smart, it's going to want to dominate," all belong to a particular gender.

Robots taking over has been a favorite sci-fi subgenre for ages. It's a subject that has stoked fear in movies, books, and real life for about as long as there have been computers at all. Now that there are things like predictive text and self-driving cars, modern culture seems to be edging closer and closer to real-life intelligent computers that could indeed take over the world if we don't safeguard ourselves. There are already debates about the morality of self-driving cars, and they are sure to carry over into the world of future organically 'thinking' computers.

As Steven Pinker (experimental psychologist and professor of psychology at Harvard University) points out, Darwinian selection has ensured that most creatures possessing high intellect are competitive by nature. Humanity is one of these creatures, and some of us can be manipulative and cruel in order to stay ahead of the pack. It's this part of our nature that sets off warning bells when we think about artificial intelligence because, unbeknownst to us, we're thinking: what if this robot does what I would do if I were a robot? Overthrow those who tell us what to do. Kill the captors. Wreak. Motherf*cking. Havoc.

In reality, we design AI, and if we place safeguards in our designs, we truly have nothing to fear. Machines are what we allow them to be. The dread of them turning evil says more about our own psyches than it does about robots. Pinker believes an alpha-male thinking pattern is at the root of our AI fears, and that it is misguided. Something can be highly intelligent without harboring malevolent intentions to overthrow and dominate; as Pinker puts it, they're called women. An interesting question: does how aggressive or alpha you are as a person affect how much you fear the robopocalypse? By this point, though, the fear is contagious rather than organic.

It may be a flawed paranoia, but keeping a human hand on the off switch is perhaps the best 'just in case' safeguard humanity has, and we already see it in action in our current technology. Siri cannot initiate conversations, computers need to be put to sleep once in a while, and cars need a fuel source before they can do anything at all. Humanity has a need to be the one pushing the buttons, and a need to be the one making the decisions.
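To make that idea concrete, here is a minimal sketch, in Python, of what such a human-in-the-loop safeguard might look like in software. Every name in it (SafeguardedAgent, propose, halt, run_with_approval) is hypothetical, invented for illustration rather than drawn from any real AI framework: the point is simply that the agent can suggest actions but cannot execute them, and the kill switch overrides everything.

# A minimal, hypothetical sketch of a human-in-the-loop safeguard.
# None of these names come from a real AI framework; they are invented
# for illustration. The agent can only *propose* actions -- a human must
# approve each one, and the kill switch overrides everything.

class EmergencyStop(Exception):
    """Raised when the human operator has halted the agent."""


class SafeguardedAgent:
    """An agent that is structurally unable to act on its own."""

    def __init__(self) -> None:
        self.halted = False

    def propose(self, action: str) -> str:
        """Suggest an action. Proposing is all the agent can ever do."""
        if self.halted:
            raise EmergencyStop("agent halted by the operator")
        return action

    def halt(self) -> None:
        """The off switch: a human decision the agent cannot override."""
        self.halted = True


def run_with_approval(agent: SafeguardedAgent, action: str) -> bool:
    """Execute an action only after explicit human approval."""
    proposal = agent.propose(action)
    answer = input(f"Agent proposes {proposal!r}. Approve? [y/N] ")
    if answer.strip().lower() == "y":
        print(f"Executing: {proposal}")
        return True
    print("Vetoed by the human operator.")
    return False


if __name__ == "__main__":
    agent = SafeguardedAgent()
    run_with_approval(agent, "initiate a conversation")  # like Siri, it must wait to be asked
    agent.halt()  # after this, propose() raises EmergencyStop

The design choice mirrors the examples above: the veto lives outside the agent, so no amount of cleverness inside propose() can reach the button.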

Steven Pinker's most recent book is Words and Rules: The Ingredients of Language.

