What it will take for AI to surpass human intelligence

Artificial Intelligence is already outsmarting us at '80s computer games by finding winning strategies the developers didn't even know existed. Just wait until it figures out how to beat us in ways that matter.

Max Tegmark: I define intelligence simply as how good something is at accomplishing complex goals. 

Human intelligence today is very different from machine intelligence in multiple ways. First of all, machine intelligence in the past was always inferior to human intelligence.

Gradually, machine intelligence got better than human intelligence in certain very, very narrow areas, like multiplying numbers quickly, as pocket calculators do, or storing and recalling large amounts of data.

What we're seeing now is that machine intelligence is spreading out a little bit from those narrow peaks and getting a bit broader. We still have nothing that is as broad as human intelligence, where a human child can learn to get pretty good at almost any goal, but you now have systems, for example, that can learn to play a whole swath of different kinds of computer games, or learn to drive a car in pretty varied environments.

Where things are obviously going in AI is increased breadth, and the Holy Grail of AI research is to build a machine that is as broad as human intelligence, one that can get good at anything. And once that's happened, it's very likely it's not only going to be as broad as human intelligence but also better than humans at all tasks, as opposed to just some, as is the case right now.

I have to confess that I’m quite the computer nerd myself. I wrote some computer games back in high school and college, and more recently I’ve been doing a lot of deep learning research with my lab at MIT. 

So something that really blew me away was when I first saw this Google DeepMind system that learned to play computer games from scratch.

You had this simulated artificial neural network that didn't know what a computer game was, what a computer was, or what a screen was. You just fed in numbers that represented the different colors on the screen, told it that it could output different numbers corresponding to different keystrokes (which it also knew nothing about), and then just kept feeding it the score. All the software knew to do was try stuff, at first randomly, that would maximize that score.
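What he's describing is, in essence, reinforcement learning: numbers in, numbers out, and nothing but a score to steer the learning. As a rough, hypothetical illustration of that idea (this is not DeepMind's actual system, which trained a deep Q-network on raw screen pixels; the tiny ToyGame environment and all names here are made up for the example), a minimal tabular Q-learning loop in Python might look like this:

```python
# A toy illustration of the "learn from the score alone" setup described above:
# the learner sees only numbers (a state), outputs numbers (an action), and is
# told nothing except the score it earned. This is NOT DeepMind's system; it is
# a hypothetical, self-contained sketch on a made-up one-dimensional "game".
import random
from collections import defaultdict

class ToyGame:
    """Stand-in for a game screen: walk along positions 0..9, score 1 for reaching 9."""
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):                      # action 0 = move left, 1 = move right
        self.pos = max(0, min(9, self.pos + (1 if action == 1 else -1)))
        reward = 1.0 if self.pos == 9 else 0.0   # the only feedback is the score
        return self.pos, reward, self.pos == 9

q = defaultdict(float)                           # q[(state, action)] -> estimated future score
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def pick_action(state):
    """Mostly exploit what raised the score before; sometimes try something random."""
    if random.random() < epsilon:
        return random.choice((0, 1))
    best = max(q[(state, a)] for a in (0, 1))
    return random.choice([a for a in (0, 1) if q[(state, a)] == best])

env = ToyGame()
for episode in range(500):
    state = env.reset()
    for _ in range(200):                         # cap episode length
        action = pick_action(state)
        next_state, reward, done = env.step(action)
        best_next = max(q[(next_state, a)] for a in (0, 1))
        # nudge the estimate toward "score received now plus discounted future score"
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if done:
            break

# After training, the greedy policy should head straight for position 9.
print([max((0, 1), key=lambda a: q[(s, a)]) for s in range(9)])
```

The real system replaced the lookup table with a deep neural network reading raw pixel values, but the core loop (act, observe the score, update, repeat) is the same.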

I remember watching this on the screen once when Demis Hassabis, the CEO of Google DeepMind, showed it, and seeing first how this thing played a total BS strategy and lost all the time.

It gradually got better and better, then it got better than I was, and after a while it figured out this crazy strategy in Breakout (where you're supposed to bounce a ball off of a brick wall): it would keep aiming for the upper left corner until it punched a hole through there and got the ball bouncing around in the back, racking up a crazy number of points.

And I was like, “Whoa, that’s intelligent!” And the guys who programmed this didn’t even know about that strategy because they hadn’t played that game very much.

This is a simple example of how machine intelligence can surpass the intelligence of its creator, much in the same way as a human child can end up becoming more intelligent than its parents if educated well. 

This is just a tiny little computer, the sort of hardware you can have on your desktop. If you now imagine scaling up to the biggest computer facilities we have in the world and give us a couple more decades of algorithm development, I think it's very plausible that we can make machines that can not just learn to play computer games better than us, but can view life as a game and do everything better than us.

Chances are, unless you happen to be in the Big Think office in Manhattan, that you're watching this on a computer or phone. Chances also are that the piece of machinery you're looking at right now has the capability to outsmart you many times over in ways you can barely comprehend. That's the beauty and the danger of AI: it's becoming smarter and smarter at a rate we can't keep up with. Max Tegmark relays a great story about a computer learning to play Breakout (the game where you break bricks with a ball you bounce off a paddle at the bottom of the screen). At first, the computer lost every game. But it quickly figured out a way to bounce the ball off a certain point on the screen to rack up a crazy number of points. Swap Breakout for, let's say, nuclear warheads or solving world hunger, and we've got a world changer on our hands. Or, in the case of our computers and smartphones, in our hands. Max's latest book is Life 3.0: Being Human in the Age of Artificial Intelligence.
