We’re smart enough to create intelligent machines. But are we wise enough?

What is the danger in creating something smarter than you? You can't control it, and pretty soon it could control you.

Technology & Innovation

Some of the most intelligent people at the most highly funded companies in the world can't seem to answer this simple question: what is the danger in creating something smarter than you? They've created "deep learning" AI so smart that it's outsmarting the people who made it. The reason is the black-box nature of the code the AI is built on: it's designed solely to become smarter, and we have no way to regulate that knowledge. That might not seem like a terrible thing if you want to build superintelligence. But we've all experienced something minor going wrong, or a bug, in our current electronics. Imagine that, but in a Robojudge that can sentence you to 10 years in prison with no explanation other than "I've been fed data and this is what I compute"... or a bug in the AI running a busy airport. We need regulation now, before we create something we can't control. Max's book Life 3.0: Being Human in the Age of Artificial Intelligence is being heralded as one of the best books on AI, period, and is a must-read if you're interested in the subject.

What it will take for AI to surpass human intelligence

Artificial Intelligence is already outsmarting us at '80s computer games by finding ways to beat games that developers didn't even know were there. Just wait until it figures out how to beat us in ways that matter.

Technology & Innovation

Chances are, unless you happen to be in the Big Think office in Manhattan, that you're watching this on a computer or phone. Chances also are that the piece of machinery you're looking at right now has the capability to outsmart you many times over, in ways you can barely comprehend. That's the beauty and the danger of AI: it's becoming smarter and smarter at a rate we can't keep up with. Max Tegmark relays a great story about a computer learning to play Breakout (the game where you break bricks with a ball and bounce the ball off a paddle you move at the bottom of the screen). At first, the computer lost every game. But it quickly figured out a way to bounce the ball off a certain point on the screen to rack up a crazy number of points. Swap Breakout for, let's say, nuclear warheads or solving world hunger, and we've got a world changer on our hands. Or, in the case of our computers and smartphones, in our hands. Max's latest book is Life 3.0: Being Human in the Age of Artificial Intelligence.
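The Breakout anecdote refers to a deep reinforcement learning agent far more sophisticated than anything shown here, but the core idea, an agent that is only told to maximize score and discovers strategies its designers never specified, can be illustrated with a toy example. The sketch below is a minimal tabular Q-learning loop on a made-up five-state "walk to the goal" task (every name and parameter here is an illustrative assumption, not the actual system Tegmark describes):

```python
import random

random.seed(0)

# Toy task: states 0..4 on a line; reaching state 4 ends the
# episode with reward 1. Actions move one step left or right.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # index 0 = left, index 1 = right

def step(state, move):
    """Apply a move, clamped to the line; reward only at the goal."""
    nxt = min(GOAL, max(0, state + move))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Off-policy Q-learning: behave randomly, learn the greedy values.
# Nobody tells the agent "move right" -- it infers that from reward alone.
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma = 0.5, 0.9  # learning rate, discount factor
for _ in range(2000):
    s, done = 0, False
    while not done:
        a = random.randrange(2)  # uniformly random exploration
        s2, r, done = step(s, ACTIONS[a])
        # Update toward reward plus discounted value of the best next action
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

# The learned greedy policy moves right from every non-goal state.
policy = [q[s].index(max(q[s])) for s in range(GOAL)]
```

The point of the sketch is the shape of the loop, not the task: the same "explore, observe reward, update value estimates" cycle, scaled up with neural networks and pixel input, is what let the Breakout agent find the tunneling trick on its own.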

The Meaning of Life: It Could Be Just a Quirk—or Quark—of Consciousness

Is science destined to crack the code of consciousness—and how would we even go about it?

Mind & Brain

In the centuries since Galileo championed heliocentrism, science has gradually come to understand more and more of our universe's natural phenomena: gravity, quantum mechanics, even ripples in space-time. But the final frontier of science isn't out there, says cosmologist and MIT professor Max Tegmark; it's the world inside our heads: consciousness. It's a highly divisive issue. Some scientists think it's unimportant, or a question for philosophers, while others like Tegmark think that the human experience, and the meaning and purpose of life, would disappear if the lights of our consciousness were to go out. Ultimately, Tegmark thinks we can understand consciousness scientifically by finding the pattern of matter from which consciousness springs. What is the difference between your brain and the food you feed it? It's all quarks, says Tegmark; the difference is the pattern they're arranged into. So how can we develop a theory of consciousness? Can we build a consciousness detector? And can we really understand what we are without unlocking humanity's greatest mystery? Tegmark muses on all of this above. Max's latest book is Life 3.0: Being Human in the Age of Artificial Intelligence.

Why Superintelligent AI Could Be the Last Human Invention

When we create something more intelligent than we could ever be, what happens after that? We have to teach it.

Technology & Innovation

Max Tegmark has a bone to pick with Hollywood. We shouldn't be afraid of AI or, for that matter, a robot uprising. We should be more afraid of the next few years, as we try to get AI through this early phase. Right now, just as a child would, machines take us literally. The key to the next few years is getting them to understand and adopt human values: that killing is bad, and that just because you can doesn't mean you should. If we don't set those boundaries now, in the future we may be viewed as nothing more than ants in their way.

Max's latest book is Life 3.0
