Superintelligence: How A.I. will surpass humans
A sobering thought for anyone laughing off the idea of robot overlords.
Max Tegmark left his native Sweden in 1990 after receiving his B.Sc. in Physics from the Royal Institute of Technology (he’d earned a B.A. in Economics the previous year at the Stockholm School of Economics). His first academic venture beyond Scandinavia brought him to California, where he studied physics at the University of California, Berkeley, earning his M.A. in 1992, and Ph.D. in 1994.
After four years of west coast living, Tegmark returned to Europe and accepted an appointment as a research associate with the Max-Planck-Institut für Physik in Munich. In 1996 he headed back to the U.S. as a Hubble Fellow and member of the Institute for Advanced Study, Princeton. Tegmark remained in New Jersey for a few years until an opportunity arose to experience the urban northeast with an Assistant Professorship at the University of Pennsylvania, where he received tenure in 2003.
He extended the east coast experiment and moved north of Philly to the shores of the Charles River (Cambridge-side), arriving at MIT in September 2004. He is married to Meia Chita-Tegmark and has two sons, Philip and Alexander.
Tegmark is an author of more than two hundred technical papers and has been featured in dozens of science documentaries. He has received numerous awards for his research, including a Packard Fellowship (2001-06), a Cottrell Scholar Award (2002-07), and an NSF Career grant (2002-07), and is a Fellow of the American Physical Society. His work with the SDSS collaboration on galaxy clustering shared the first prize in Science magazine’s "Breakthrough of the Year: 2003."
Max Tegmark: I define intelligence as how good something is at accomplishing complex goals. So let’s unpack that a little bit. First of all, it’s a spectrum of abilities since there are many different goals you can have, so it makes no sense to quantify something’s intelligence by just one number like an IQ.
To see how ridiculous that would be, just imagine if I told you that athletic ability could be quantified by a single number, the “Athletic Quotient,” and whatever athlete had the highest AQ would win all the gold medals in the Olympics. It’s the same with intelligence.
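Tegmark's point that intelligence is a profile of abilities rather than a single number can be caricatured in a few lines of code. The agents, goals, and scores below are made-up illustrations, not measurements from the article:

```python
# Toy model: intelligence as a profile of scores across different goals,
# not one scalar. All agents and scores here are hypothetical.
agents = {
    "calculator": {"arithmetic": 0.99, "driving": 0.0, "go": 0.0},
    "go_program": {"arithmetic": 0.2,  "driving": 0.0, "go": 0.95},
    "human":      {"arithmetic": 0.5,  "driving": 0.8, "go": 0.6},
}

def best_at(goal):
    """Return the agent with the highest score on a given goal."""
    return max(agents, key=lambda a: agents[a][goal])

# Different goals crown different "most intelligent" agents, so no single
# ranking (an "IQ" or "AQ") can order them all.
print(best_at("arithmetic"))  # calculator
print(best_at("go"))          # go_program
print(best_at("driving"))     # human
```

The design point is that `best_at` depends on which goal you ask about, which is exactly why a single quotient can't capture the spectrum.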
So if you have a machine that’s pretty good at some tasks, these days it usually has pretty narrow intelligence: maybe the machine is very good at multiplying numbers fast because it’s your pocket calculator, maybe it’s good at driving cars or playing Go.
Humans, on the other hand, have a remarkably broad intelligence. A human child can learn almost anything given enough time. Even though we now have machines that can learn, sometimes learn to do certain narrow tasks better than humans, machine learning is still very unimpressive compared to human learning. For example, it might take a machine tens of thousands of pictures of cats and dogs until it becomes able to tell a cat from a dog, whereas human children can sometimes learn what a cat is from seeing it once. Another area where we have a long way to go in AI is generalizing.
If a human learns to play one particular kind of game they can very quickly take that knowledge and apply it to some other kind of game or some other life situation altogether.
And this is a fascinating frontier of AI research now: How can we make machines as good at learning from very limited data as people are?
And I think part of the challenge is that we humans aren’t just learning to recognize some patterns, we also gradually learn to develop a whole model of the world.
So if you ask “Are there machines that are more intelligent than people today,” there are machines that are better than us at accomplishing some goals, but absolutely not all goals.
AGI, artificial general intelligence, that’s the dream of the field of AI: to build a machine that’s better than us at all goals. We’re not there yet, but a good fraction of leading AI researchers think we are going to get there, maybe in a few decades. And if that happens, you have to ask yourself if that might lead to machines getting not just a little better than us, but way better at all goals, having superintelligence.
The argument for that is actually really interesting and goes back to the ‘60s, to the mathematician I. J. Good, who pointed out that the goal of building an intelligent machine is in and of itself something that you can do with intelligence.
So once you get machines that are better than us at that narrow task of building AI, then future AIs can be built not by human engineers but by machines, except they might do it thousands or a million times faster. So in my book, I explore the scenario where you have this computer called Prometheus, which has vastly more hardware than a human brain does, but is still very limited by its software being kind of dumb.
So at the point where it gets human-level general intelligence, the first thing it does is it uses this to realize, “Oh! I can reprogram my software to become much better,” and now it’s a lot smarter. And a few minutes later it does this again, and then it does it again and does it again, and in a matter of perhaps a few days or weeks, a machine like that might be able to become not just a little bit smarter than us but leave us far, far behind.
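The loop Tegmark describes, in which each round of self-improvement makes the next round faster, can be caricatured in a few lines. The growth rate and timing constants here are arbitrary assumptions for illustration, not predictions:

```python
# Toy caricature of I. J. Good's "intelligence explosion": each rewrite
# multiplies capability, and higher capability shortens the time the next
# rewrite takes. All constants are arbitrary assumptions.
def intelligence_explosion(rounds=10, capability=1.0, gain=2.0):
    t = 0.0
    history = [(t, capability)]
    for _ in range(rounds):
        t += 1.0 / capability   # smarter systems improve themselves faster
        capability *= gain      # each rewrite doubles capability
        history.append((t, capability))
    return history

history = intelligence_explosion()
# Capability grows geometrically (2**10 = 1024x after 10 rounds), while
# the elapsed time is a geometric series that stays below 2 time units.
print(f"{len(history) - 1} rounds, {history[-1][1]:.0f}x capability "
      f"in {history[-1][0]:.3f} time units")
```

The qualitative takeaway matches the transcript: runaway capability gains can be compressed into a short window once the improvement process itself becomes the thing being improved.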
I think a lot of people dismiss this kind of talk of superintelligence as science fiction because we’re stuck in this sort of carbon chauvinism idea that intelligence can only exist in biological organisms made of cells and carbon atoms. From my perspective as a physicist, intelligence is just a kind of information processing performed by elementary particles moving around according to the laws of physics. And there’s absolutely no law of physics that says you can’t do that in ways that are much more intelligent than humans.
We’re so limited by how much brain matter fits through our mommy’s birth canal and stuff like this, and machines are not, so I think it’s very likely that once machines reach human level they’re not going to stop there; they’ll just blow right by, and we might one day have machines that are as much smarter than us as we are smarter than snails.
Right now, AI needs thousands of pictures in order to correctly tell a dog from a cat, whereas human babies and toddlers only need to see each animal once to know the difference. But AI won't be that way forever, says AI expert and author Max Tegmark, because machines haven't yet learned to design smarter versions of themselves. Once AI reaches AGI (Artificial General Intelligence) it will be able to upgrade itself, and thereby blow right past us. A sobering thought. Max's book Life 3.0: Being Human in the Age of Artificial Intelligence is being heralded as one of the best books on AI, period, and is a must-read if you're interested in the subject.