
Why Superintelligent AI Could Be the Last Human Invention

When we create something more intelligent than we could ever be, what happens after that? We have to teach it.

MAX TEGMARK: Hollywood movies make people worry about the wrong things when it comes to superintelligence. What we should really worry about is not malice but competence: machines that are smarter than us whose goals just aren’t aligned with ours. For example, I don’t hate ants. I don’t go out of my way to stomp an ant if I see one on the sidewalk. But if I’m in charge of a hydroelectric dam construction and, just as I’m about to flood this valley with water, I see an ant hill there, tough luck for the ants. Their goals weren’t aligned with mine, and because I’m smarter it’s going to be my goals, not the ants’ goals, that get fulfilled. We never want to put humanity in the role of those ants.

On the other hand, it doesn’t have to be bad if you solve the goal alignment problem. Little babies tend to grow up in a household surrounded by intelligences greater than their own, namely their parents. And that works out fine because the goals of the parents are wonderfully aligned with the goals of the child, so it’s all good. And this is one vision that a lot of AI researchers have, the friendly AI vision: that we will succeed in not just making machines that are smarter than us, but also machines that then learn, adopt, and retain our goals as they get ever smarter.

It might sound easy to get machines to learn, adopt, and retain our goals, but these are all very tough problems. First of all, if you take a future self-driving taxi and tell it to take you to the airport as fast as possible, and then you get there covered in vomit and chased by helicopters, and you say, “No, no, no! That’s not what I wanted!” and it replies, “That is exactly what you asked for,” then you’ll appreciate how hard it is to get a machine to understand your goals, your actual goals.

A human cabdriver would have realized that you also had other, unstated goals, because she is also a human and shares your frame of reference; a machine doesn’t have that unless we explicitly teach it. And once a machine understands our goals, there’s a separate problem of getting it to adopt them. Anyone who has had kids knows how big the difference is between getting the kids to understand what you want and getting them to actually adopt your goals and do what you want.

And finally, even if you can get your kids to adopt your goals, that doesn’t mean they’re going to retain them for life. My kids are a lot less excited about Lego now than they were when they were little, and we don’t want machines, as they get ever smarter, to gradually drift away from the goal of protecting us, treating the care of humanity as some childhood enthusiasm (like Lego) that they eventually get bored with.

If we can solve all three of these challenges, getting machines to understand our goals, adopt them, and retain them, then we can create an awesome future, because everything I love about civilization is a product of intelligence. If we can use machines to amplify our intelligence, we have the potential to solve all the problems that are stumping us today and create a better future than we even dare to dream of.

If machines ever surpass us and can outsmart us at all tasks, that’s going to be a really big deal, because intelligence is power. The reason that we humans have more power on this planet than tigers is not that we have larger muscles or sharper claws; it’s that we’re smarter than the tigers. In exactly the same way, if machines are smarter than us, it becomes perfectly plausible for them to control us and become the rulers of this planet and beyond. When I. J. Good made his famous analysis of how you could get an intelligence explosion, where intelligence just keeps creating greater and greater intelligence, leaving us far behind, he also noted that this superintelligence would be the last invention that man need ever make. What he meant, of course, was that so far the most intelligent being on this planet, the one doing all the inventing, has been us. But once we make machines that are better than us at inventing, all future technology we could ever need can be created by those machines, provided we make sure that they do the things we want and help us create an awesome future where humanity can flourish like never before.

Max Tegmark has a bone to pick with Hollywood. We shouldn't be afraid of AI or, for that matter, a robot uprising. We should be more worried about the next few years, while we try to get AI through this early phase. Right now, just as a child would, machines take us literally. The key to the next few years is getting them to understand and adopt human values: that killing is bad, and that just because you can doesn't mean you should. If we don't set those boundaries now, in the future we may be viewed as nothing more than ants in their way.

Max Tegmark's latest book is Life 3.0.
