Who's in the Video
Max Tegmark left his native Sweden in 1990 after receiving his B.Sc. in Physics from the Royal Institute of Technology (he’d earned a B.A. in Economics the previous year at[…]

Right now, AI struggles to tell the difference between a cat and a dog. AI needs thousands of pictures in order to reliably distinguish a dog from a cat, whereas human babies and toddlers only need to see each animal once to know the difference. But AI won’t be that way forever, says AI expert and author Max Tegmark, because it hasn’t yet learned how to replicate its own intelligence. However, once AI reaches AGI, or Artificial General Intelligence, it will be able to upgrade itself and blow right past us. A sobering thought. Max’s book Life 3.0: Being Human in the Age of Artificial Intelligence is being heralded as one of the best books on AI, period, and is a must-read if you’re interested in the subject.

Max Tegmark: I define intelligence as how good something is at accomplishing complex goals. So let’s unpack that a little bit. First of all, it’s a spectrum of abilities since there are many different goals you can have, so it makes no sense to quantify something’s intelligence by just one number like an IQ.

To see how ridiculous that would be, just imagine if I told you that athletic ability could be quantified by a single number, the “Athletic Quotient,” and whatever athlete had the highest AQ would win all the gold medals in the Olympics. It’s the same with intelligence.
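The point of the AQ analogy is that ability is a profile across many tasks, not a single number. A minimal sketch with made-up scores (higher is better; the athletes, events, and numbers are purely hypothetical, not from the talk) shows why no scalar ranking captures it:

```python
# Hypothetical score profiles: each athlete is a vector of abilities,
# one score per event, with higher meaning better.
sprinter = {"100m": 9.5, "marathon": 3.0, "weightlifting": 4.0}
swimmer = {"100m": 5.0, "marathon": 4.0, "weightlifting": 2.5}

def dominates(a, b):
    """True only if profile `a` is at least as good as `b` in every event."""
    return all(a[event] >= b[event] for event in a)

# Neither profile dominates the other: the sprinter wins two events,
# the swimmer wins the marathon.
print(dominates(sprinter, swimmer), dominates(swimmer, sprinter))
```

Because neither athlete dominates, any single "AQ" number would have to weight the events arbitrarily, and that weighting, not the athletes, would decide the ranking. The same holds for collapsing intelligence into one IQ-like score.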

So if you have a machine that’s pretty good at some tasks, these days it usually has pretty narrow intelligence: maybe the machine is very good at multiplying numbers fast because it’s your pocket calculator, or maybe it’s good at driving cars or playing Go.

Humans, on the other hand, have a remarkably broad intelligence. A human child can learn almost anything given enough time. Even though we now have machines that can learn, sometimes learn to do certain narrow tasks better than humans, machine learning is still very unimpressive compared to human learning. For example, it might take a machine tens of thousands of pictures of cats and dogs until it becomes able to tell a cat from a dog, whereas human children can sometimes learn what a cat is from seeing it once. Another area where we have a long way to go in AI is generalizing.
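One way to make the one-shot contrast concrete is a nearest-example classifier: store a single labeled example per class and label new inputs by whichever stored example is closest. This toy sketch (my own illustration with invented 2-D "features"; real one-shot learning systems are far more sophisticated) shows the mechanics:

```python
import math

def one_shot_classify(examples, point):
    """examples: {label: feature_vector}, one stored example per class.
    Returns the label whose stored example is nearest to `point`
    by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(examples, key=lambda label: dist(examples[label], point))

# One labeled example per class; the two feature axes are made up
# (say, body size and ear pointiness, scaled to [0, 1]).
examples = {"cat": (0.3, 0.9), "dog": (0.7, 0.4)}
print(one_shot_classify(examples, (0.35, 0.8)))  # a cat-like point
```

The catch, and the reason deep networks still need thousands of images, is that this only works if the features already capture what matters; children seem to come equipped with representations that make a single example enough.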

If a human learns to play one particular kind of game they can very quickly take that knowledge and apply it to some other kind of game or some other life situation altogether.

And this is a fascinating frontier of AI research now: how can we make machines as good at learning from very limited data as people are?

And I think part of the challenge is that we humans aren’t just learning to recognize some patterns, we also gradually learn to develop a whole model of the world.

So if you ask “Are there machines that are more intelligent than people today,” there are machines that are better than us at accomplishing some goals, but absolutely not all goals.

AGI, artificial general intelligence, that’s the dream of the field of AI: to build a machine that’s better than us at all goals. We’re not there yet, but a good fraction of leading AI researchers think we are going to get there maybe in a few decades. And if that happens you have to ask yourself if that might lead to machines getting not just a little better than us, but way better at all goals, having super intelligence.

The argument for that is actually really interesting and goes back to the ‘60s, to the mathematician I. J. Good, who pointed out that the goal of building an intelligent machine is in and of itself something that you can do with intelligence.

So once you get machines that are better than us at that narrow task of building AI, then future AIs can be built not by human engineers but by machines, except they might do it thousands or millions of times faster. So in my book, I explore the scenario where you have this computer called Prometheus, which has vastly more hardware than a human brain does, and it’s still very limited by its software being kind of dumb.

So at the point where it gets human-level general intelligence, the first thing it does is it uses this to realize, “Oh! I can reprogram my software to become much better,” and now it’s a lot smarter. And a few minutes later it does this again, and then it does it again and does it again, and in a matter of perhaps a few days or weeks, a machine like that might be able to become not just a little bit smarter than us but leave us far, far behind.
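The dynamics of that loop can be sketched with a toy model (the multipliers and cycle times here are arbitrary assumptions of mine, not numbers from the book): each self-improvement cycle makes the machine both smarter and faster at producing the next improvement, so capability explodes while the total elapsed time stays bounded.

```python
def self_improvement(initial=1.0, gain=2.0, first_cycle_hours=10.0,
                     speedup=2.0, cycles=20):
    """Toy intelligence-explosion model: every cycle multiplies
    capability by `gain` and divides the next cycle's duration by
    `speedup`. Returns (final capability, total hours elapsed)."""
    capability, cycle_time, elapsed = initial, first_cycle_hours, 0.0
    for _ in range(cycles):
        elapsed += cycle_time
        capability *= gain       # each rewrite makes the software smarter...
        cycle_time /= speedup    # ...and faster at producing the next rewrite
    return capability, elapsed

capability, hours = self_improvement()
print(capability, hours)
```

With these made-up parameters, twenty doublings multiply capability about a million-fold, yet the cycle times form a geometric series that never exceeds twenty hours, which is the formal core of the "days or weeks, not decades" worry.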

I think a lot of people dismiss this kind of talk of super intelligence as science fiction because we’re stuck in this sort of carbon chauvinism idea that intelligence can only exist in biological organisms made of cells and carbon atoms. As a physicist, from my perspective intelligence is just a kind of information processing performed by elementary particles moving around according to the laws of physics. And there’s absolutely no law of physics that says you can’t do that in ways that are much more intelligent than humans.

We’re so limited by how much brain matter fits through our mommy’s birth canal and stuff like this, and machines are not, so I think it’s very likely that once machines reach human-level they’re not going to stop there; they’ll just blow right by, and that we might one day have machines that are as much smarter than us as we are smarter than snails.