Max Tegmark has a bone to pick with Hollywood. We shouldn't be afraid of AI or, for that matter, a robot uprising. We should be more concerned about the next few years, while we try to get AI through this early phase. Right now, machines take us literally, just the way a child would. The key to the next few years is getting them to understand and adopt human values (for example, that killing is bad, and that just because you can doesn't mean you should), because if we don't set those boundaries now, in the future we may be viewed as nothing more than ants in their way.

Max’s latest book is Life 3.0

MAX TEGMARK: Hollywood movies make people worry about the wrong things in terms of superintelligence. What we should really worry about is not malice but competence: machines that are smarter than us whose goals just aren't aligned with ours. For example, I don't hate ants. I don't go out of my way to stomp on an ant if I see one on the sidewalk. But if I'm in charge of a hydroelectric dam construction and, just as I'm about to flood the valley with water, I see an anthill there, tough luck for the ants. Their goals weren't aligned with mine, and because I'm smarter, it's going to be my goals, not the ants' goals, that get fulfilled. We never want to put humanity in the role of those ants.

On the other hand it doesn’t have to be bad if you solve the goal alignment problem. Little babies tend to be in a household surrounded by human level intelligence as they’re smarter than the babies, namely their parents. And that works out fine because the goals of the parents are wonderfully aligned with the goals of the child’s so it’s all good. And this is one vision that a lot of AI researchers have, the friendly AI vision that we will succeed in not just making machines that are smarter than us, but also machines that then learn, adopt and retain our goals as they get ever smarter.

It might sound easy to get machines to learn, adopt, and retain our goals, but these are all very tough problems. First of all, if in the future you take a self-driving taxi and tell it to take you to the airport as fast as possible, and you get there covered in vomit and chased by helicopters, and you say, "No, no, no! That's not what I wanted!" and it replies, "That is exactly what you asked for," then you'll have appreciated how hard it is to get a machine to understand your goals, your actual goals.

A human cab driver would have realized that you also had other, unstated goals, because she is also a human with the same shared frame of reference; a machine doesn't have that unless we explicitly teach it. And once the machine understands our goals, there's a separate problem of getting it to adopt them. Anyone who has had kids knows how big the difference is between making the kids understand what you want and getting them to actually adopt your goals and do what you want.

And finally, even if you can get your kids to adopt your goals, that doesn't mean they're going to retain them for life. My kids are a lot less excited about Lego now than they were when they were little, and we don't want machines, as they get ever smarter, to gradually change their goals away from protecting us, treating the care of humanity as some little childhood thing (like Lego) that they eventually get bored with.

If we can solve all three of these challenges, getting machines to understand our goals, adopt them, and retain them, then we can create an awesome future, because everything I love about civilization is a product of intelligence. If we can use machines to amplify our intelligence, we have the potential to solve all the problems that are stumping us today and create a better future than we even dare to dream of.

If machines ever surpass us and can outsmart us at all tasks, that's going to be a really big deal, because intelligence is power. The reason we humans have more power on this planet than tigers is not that we have larger muscles or sharper claws; it's that we're smarter than the tigers. And in exactly the same way, if machines are smarter than us, it becomes perfectly plausible for them to control us and become the rulers of this planet and beyond. When I. J. Good made his famous analysis of how you could get an intelligence explosion, where intelligence keeps creating greater and greater intelligence, leaving us far behind, he also mentioned that this superintelligence would be the last invention that man need ever make. What he meant, of course, was that so far the most intelligent beings on this planet, the ones doing all the inventing, have been us. But once we make machines that are better than us at inventing, all future technology we ever need can be created by those machines, provided we can make sure that they do the things we want and help us create an awesome future where humanity can flourish like never before.

