There’s no way we could stop a rogue AI

Max Planck Institute scientists crash into a computing wall there seems to be no way around.

  • Artificial intelligence that's smarter than us could potentially solve problems beyond our grasp.
  • Self-learning AI can absorb whatever information it needs from the internet, a Pandora's Box if ever there was one.
  • The nature of computing itself prevents us from limiting the actions of a super-intelligent AI if it gets out of control.

There have been a fair number of voices, Stephen Hawking among them, raised in warning that a super-intelligent artificial intelligence could one day turn on us and that we shouldn't be in such a hot, unquestioning hurry to develop true AI. Others say, naw, don't worry. Now a new study from scientists at the Center for Humans and Machines at the Max Planck Institute for Human Development presents a series of theoretical proofs that confirm the threat: Due to the basic concepts underlying computing, we would be utterly unable to control a super-intelligent AI.

"We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself," write the paper's authors.

The study is published in the Journal of Artificial Intelligence Research.

Why worry?

"A super-intelligent machine that controls the world sounds like science fiction," says paper co-author Manuel Cebrian in a press release. "But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity."

The lure of AI is clear. Its ability to "see" the patterns in data makes it a promising agent for solving problems too complex for us to wrap our minds around. Could it cure cancer? Solve the climate crisis? The possibilities are nearly endless.

Connected to the internet, AI can grab whatever information it needs to achieve its task, and therein lies a big part of the danger. With access to every bit of human data—and responsible for its own education—who knows what lessons it would learn regardless of any ethical constraints built into its programming? Who knows what goals it would embrace and what it might do to achieve them?

Even assuming benevolence, there's danger. Suppose an AI is confronted with an either/or choice akin to the trolley problem, maybe even on a grand scale: Might it annihilate millions of people if it calculated that the remaining billions would stand a better chance of survival?

A pair of flawed options

The most obvious way to keep a super-intelligent AI from getting ahead of us is to limit its access to information by preventing it from connecting to the internet. The problem with limiting access to information, though, is that it would make any problem we assign the AI more difficult to solve. We would be weakening its problem-solving promise, possibly to the point of uselessness.

The second approach is to limit what a super-intelligent AI is capable of doing by programming certain boundaries into it. This might be akin to writer Isaac Asimov's Three Laws of Robotics, the first of which goes: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

Unfortunately, says the study, a series of logical proofs reveals that it's impossible to create such limits. Any such containment algorithm, it turns out, would be self-defeating.

Containment is impossible

"If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable."

The team investigated stacking containment algorithms, with each monitoring the behavior of the previous one, but eventually the same problem arises: The final check in the chain halts itself, rendering the whole scheme unreliable.
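
How does a containment algorithm end up halting itself? The result echoes Alan Turing's halting problem: no general algorithm can inspect an arbitrary program and reliably predict what it will do. The Python sketch below is a minimal illustration of the underlying self-reference trap, not the paper's actual formalism; `is_safe`, `paradox`, and `destroy_the_world` are hypothetical names invented for this example.

```python
# A minimal sketch of the diagonalization argument behind the paper's claim.
# Everything here is hypothetical; no real is_safe() can exist, and that is
# precisely the point.

def destroy_the_world() -> None:
    """Stand-in for whatever harmful behavior containment must prevent."""
    print("(hypothetical catastrophe)")

def is_safe(program_source: str, input_data: str) -> bool:
    """A *hypothetical* perfect containment check.

    It would have to return True exactly when running program_source on
    input_data never causes harm, return False otherwise, and always
    finish in finite time. The construction below shows why no such
    function can exist.
    """
    raise NotImplementedError("No algorithm can compute this for all programs.")

def paradox(program_source: str) -> None:
    """A program deliberately built out of the checker itself."""
    if is_safe(program_source, program_source):
        destroy_the_world()  # the checker judged this safe, so cause harm
    # If the checker says "unsafe," do nothing harmful at all.

# Ask the checker to judge paradox() running on its own source code:
#   * If is_safe returns True, paradox proceeds to cause harm, so the
#     checker was wrong.
#   * If is_safe returns False, paradox does nothing harmful, so the
#     checker was wrong again.
# Either answer contradicts the checker's supposed perfection, mirroring
# Turing's 1936 proof that the halting problem is undecidable.
```

The same self-reference that sinks a universal halting checker sinks a universal harm checker, which is why the authors argue that total containment is impossible in principle, not merely difficult in practice.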

Too smart?

The Max Planck researchers also concluded that a similar bit of logic makes it impossible for us to know when a self-learning computer's intelligence has come to exceed our own. Essentially, we're not smart enough to develop tests for an intelligence superior to ours; by the paper's reasoning, deciding whether a machine has become super-intelligent runs into the same fundamental computational limits as containing one.

"Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do." — Alan Turing

This means that it's entirely conceivable that an AI capable of self-learning may well quietly ascend to super-intelligence without our even knowing it, a scary reason all by itself to slow down our hurly-burly race to artificial intelligence.

In the end, we're left with a dangerous bargain to make or not make: Do we risk our safety in exchange for the possibility that AI will solve problems we can't?
