We’re smart enough to create intelligent machines. But are we wise enough?
What is the danger in creating something smarter than you? You can't control it, and pretty soon it could control you.
Max Tegmark left his native Sweden in 1990 after receiving his B.Sc. in Physics from the Royal Institute of Technology (he’d earned a B.A. in Economics the previous year at the Stockholm School of Economics). His first academic venture beyond Scandinavia brought him to California, where he studied physics at the University of California, Berkeley, earning his M.A. in 1992, and Ph.D. in 1994.
After four years of west coast living, Tegmark returned to Europe and accepted an appointment as a research associate with the Max-Planck-Institut für Physik in Munich. In 1996 he headed back to the U.S. as a Hubble Fellow and member of the Institute for Advanced Study, Princeton. Tegmark remained in New Jersey for a few years until an opportunity arrived to experience the urban northeast with an Assistant Professorship at the University of Pennsylvania, where he received tenure in 2003.
He extended the east coast experiment and moved north of Philly to the shores of the Charles River (Cambridge-side), arriving at MIT in September 2004. He is married to Meia-Chita Tegmark and has two sons, Philip and Alexander.
Tegmark has authored more than two hundred technical papers and has been featured in dozens of science documentaries. He has received numerous awards for his research, including a Packard Fellowship (2001-06), a Cottrell Scholar Award (2002-07), and an NSF CAREER grant (2002-07), and is a Fellow of the American Physical Society. His work with the SDSS collaboration on galaxy clustering shared first prize in Science magazine’s "Breakthrough of the Year: 2003."
Max Tegmark: I’m optimistic that we can create an awesome future with technology as long as we win the race between the growing power of the tech and the growing wisdom with which we manage the tech.
This is actually getting harder because of nerdy technical developments in the AI field.
It used to be, when we wrote state-of-the-art AI—like, for example, IBM’s Deep Blue computer that defeated Garry Kasparov in chess a couple of decades ago—that all the intelligence was basically programmed in by humans who knew how to play chess, and the computer won the game just because it could think faster and remember more. But we understood the software well.
Understanding what your AI system does is one of those pieces of wisdom you have to have to be able to really trust it.
The reason we have so many problems today with systems getting hacked or crashing because of bugs is exactly because we didn’t understand the systems as well as we should have.
Now what’s happening is fascinating: today’s biggest AI breakthroughs are of a completely different kind, where rather than the intelligence being largely programmed in easy-to-understand code, you put in almost nothing except a little learning rule by which a simulated network of neurons can take a lot of data and figure out how to get stuff done.
Such a deep learning system suddenly becomes able to do things, often even better than the programmers themselves ever could.
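The "little learning rule" idea can be sketched in a few lines. This is a minimal, purely illustrative example (not anything Tegmark describes specifically): a single simulated neuron with one weight learns the rule y = 2x from examples, with no hand-coded knowledge of that rule—only a gradient-descent update that nudges the weight to reduce error.

```python
import random

def train(examples, steps=2000, lr=0.05):
    """Learn a single weight w so that w * x approximates y, from (x, y) pairs."""
    w = random.uniform(-1.0, 1.0)  # the "program" starts random and is learned, not written
    for _ in range(steps):
        x, y = random.choice(examples)
        pred = w * x
        grad = 2 * (pred - y) * x  # gradient of squared error with respect to w
        w -= lr * grad             # the learning rule: nudge w downhill on the error
    return w

# Feed in data; the behavior (multiply by 2) emerges from the learning rule alone.
data = [(x, 2.0 * x) for x in [-2, -1, 1, 2, 3]]
w = train(data)
```

Real deep learning applies the same principle across millions of weights, which is exactly why the resulting behavior is so hard to inspect: no human wrote any of those numbers.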
You can train a machine to play computer games with almost no hard-coded stuff at all. You don’t tell it what a game is, what the things are on the screen, or even that there is such a thing as a screen—you just feed in a bunch of data about the colors of the pixels and tell it, “Hey go ahead and maximize that number in the upper left corner,” and gradually you come back and it’s playing some game much better than I could.
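A toy version of "maximize that number in the upper left corner" shows the flavor of this. In the sketch below (all names and reward values are invented for illustration), the agent knows nothing about the "game": it just tries actions, observes a noisy score, and keeps a running estimate of each action's average reward, gradually favoring whatever scores best.

```python
import random

def play(n_rounds=5000, epsilon=0.1):
    """Trial-and-error score maximization over unknown actions."""
    rewards = {"left": 0.2, "right": 0.8, "jump": 0.5}  # hidden from the agent
    est = {a: 0.0 for a in rewards}    # agent's estimate of each action's payoff
    counts = {a: 0 for a in rewards}
    for _ in range(n_rounds):
        if random.random() < epsilon:
            action = random.choice(list(rewards))  # occasionally explore at random
        else:
            action = max(est, key=est.get)         # otherwise exploit the best guess
        reward = rewards[action] + random.gauss(0, 0.1)  # noisy score signal
        counts[action] += 1
        est[action] += (reward - est[action]) / counts[action]  # running average
    return max(est, key=est.get)

best = play()
```

Nobody tells the agent what "left" or "right" mean; competent behavior falls out of blindly maximizing the score—which is also why explaining *why* it acts as it does is so hard.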
The challenge is that even though this is very powerful, it’s very much a “black box” now: yes, it does all that great stuff—and we don’t understand how.
So suppose I get sentenced to ten years in prison by a Robojudge in the future and I ask, “Why?”
And I’m told, “I WAS TRAINED ON SEVEN TERABYTES OF DATA, AND THIS WAS THE DECISION.” That’s not very satisfying for me.
Or suppose the machine in charge of our electric power grid suddenly malfunctions and someone says, “Well, we have no idea why. We trained it on a lot of data and it worked.” That doesn’t instill the kind of trust we want to place in these systems.
When you get the blue screen of death because your Windows machine crashes, or the spinning wheel of doom because your Mac crashes, “annoying” is probably the main emotion we have. But “annoying” isn’t the emotion we’d have if it were the software flying my airplane that crashed, or the software controlling the nuclear arsenal of the U.S., or something like that.
And as AI gets more and more out into the world, we absolutely need to transform today’s hackable and buggy AI systems into AI systems that we can really trust.
Some of the most intelligent people at the most highly-funded companies in the world can't seem to answer this simple question: what is the danger in creating something smarter than you? Through "deep learning," they've created AI so smart that it's outsmarting the people who made it. The reason is the "black box" style of code the AI is based on—it's built solely to become smarter, and we have no way to audit what it has learned. That might not seem like a terrible thing if you want to build superintelligence. But we've all experienced something minor going wrong, or a bug, in our current electronics. Imagine that, but in a Robojudge that can sentence you to 10 years in prison with no explanation other than "I've been fed data and this is what I compute"—or a bug in the AI running a busy airport. We need regulation now, before we create something we can't control. Max's book Life 3.0: Being Human in the Age of Artificial Intelligence is being heralded as one of the best books on AI, period, and is a must-read if you're interested in the subject.