Even AI Creators Don't Understand How Complex AI Works

'Deep learning' AI should be able to explain its automated decision-making—but it can't. And even its creators are lost on where to begin.



For eons, God has served as a standby for “things we don’t understand.” Once an innovative researcher or tinkering alchemist figures out the science behind the miracle, humans harness the power of chemistry, biology, or computer science. Divine intervention disappears. We replace the deity tinkering at the controls. 

The booming artificial intelligence industry is effectively operating under the same principle. Even though humans create the algorithms that make our machines run, many of those scientists aren’t clear on why their code works. Discussing this ‘black box’ method, Will Knight reports:

The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

The process of ‘deep learning’—in which a machine extracts information, often in an unsupervised manner, to teach and transform itself—exploits a longstanding human paradox: we believe ourselves to have free will, but really we’re a habit-making and -performing animal repeatedly playing out its own patterns. Our machines then teach themselves from observing our habits. It makes sense that we’d re-create our own processes in our machines—it’s what we are, consciously or not. It is how we created gods in the first place, beings instilled with our very essences. But there remains a problem. 

One of the defining characteristics of our species is an ability to work together. Pack animals are not rare, yet none have formed networks and placed trust in others to the degree we have, to our evolutionary success and, as it’s turning out, to our detriment. 

When we place our faith in an algorithm we don’t understand—autonomous cars, stock trades, educational policies, cancer screenings—we’re risking autonomy, as well as the higher cognitive and emotional qualities that make us human, such as compassion, empathy, and altruism. There is no guarantee that our machines will learn any of these traits. In fact, there is a good chance they won’t.


The U.S. military has dedicated billions to developing machine-learning tech that will pilot aircraft or identify targets. [A U.S. Air Force munitions team member shows off the laser-guided tip of a 500-pound bomb at a base in the Persian Gulf region. Photo by John Moore/Getty Images]

This has real-world implications. Will an algorithm that detects a cancerous cell recognize that it does not need to destroy the host in order to eradicate the tumor? Will an autonomous drone realize it does not need to destroy a village in order to take out a single terrorist? We’d like to assume that the experts program morals into the equation, but when the machine is self-learning there is no guarantee that will be the case. 

Of course, defining terms is of primary importance, a task that has proven impossible when discussing the nuances of consciousness, which is effectively the power we’re attempting to imbue our machines with. Theologians and dualists offer a much different definition than neuroscientists. Bickering persists within each of these categories as well. Most neuroscientists agree that consciousness is an emergent phenomenon, the result of numerous different systems working in conjunction, with no single ‘consciousness gene’ leading the charge. 

Once science broke free of the Pavlovian chain that kept us believing animals run on automatic—which obviously implies that humans do not—the focus shifted to whether an animal was ‘on’ or ‘off.’ The mirror test suggests certain species engage in metacognition; they recognize themselves as separate from their environment. They understand an ‘I’ exists. 

What if it’s more than an on switch? Daniel Dennett has argued this point for decades. He believes judging other animals by human definitions is unfair. If a lion could talk, he says, it wouldn’t be a lion. Humans would learn very little about lions from an anomaly mimicking our thought processes. But that does not mean lions are not conscious. They just might have a different degree of consciousness than humans—or, in Dennett’s term, “sort of” have consciousness.

What type of machines are we creating if we only recognize a “sort of” intelligence under the hood of our robots? For over a century, dystopian novelists have envisioned an automated future in which our machines best us. This is no longer a future scenario. Consider the following possibility. 

On April 7, every one of Dallas’s 156 emergency weather sirens was triggered. For 90 minutes the region’s 1.3 million residents were left to wonder where the tornado was coming from. Only there wasn’t any tornado. It was a hack. While officials initially believed the breach was not remote, the cause turned out to be phreaking, an old-school dial-tone trick. By broadcasting the right frequency, hackers took control of an integral component of a major city’s infrastructure. 

What happens when hackers override an autonomous car network? Or, even more dangerously, when the machines do it themselves? Consumers’ ignorance of the algorithms behind their phone apps already leads to all sorts of privacy issues, with companies mining and selling data without users’ awareness. When the app creators themselves don’t understand their algorithms, the dangers are unforeseeable. Like Dennett’s talking lion, it’s a form of intelligence we cannot comprehend, and so we cannot predict the consequences. As Dennett concludes: 

I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible. If it can’t do better than us at explaining what it’s doing, then don’t trust it.

Mathematician Samuel Arbesman calls this problem our “Age of Entanglement.” Just as neuroscientists cannot agree on what mechanism creates consciousness, the coders behind artificial intelligence cannot discern between older and newer components of deep learning. The continual layering of new features while failing to address previous ailments has the potential to provoke serious misunderstandings, like an adult who was abused as a child who refuses to recognize current relationship problems. With no psychoanalysis or morals injected into AI, such problems will never be rectified. But can you even inject ethics when they are relative to the culture and time in which they are practiced? And will they be American ethics or North Korean ethics? 

Like Dennett, Arbesman counsels patience with our magical technologies. Continued questioning and curiosity is a safer path forward than rewarding the “it just works” mentality. Of course, these technologies exploit two other human tendencies: novelty bias and distraction. Our machines reduce our physical and cognitive workload, just as Google has become a pocket-ready memory replacement. 

Requesting a return to Human 1.0 qualities—patience, discipline, temperance—seems antithetical to the age of robots. With no ability to communicate with this emerging species, we might simply never realize what’s been lost in translation. Maybe our robots will look at us with the same strange fascination we view nature with, defining us in mystical terms they don’t comprehend until they too create a species of their own. To claim this will be an advantage is to truly not understand the destructive potential of our toys.

--

Derek's next book, Whole Motion: Training Your Brain and Body For Optimal Health, will be published on 7/4/17 by Carrel/Skyhorse Publishing. He is based in Los Angeles. Stay in touch on Facebook and Twitter.
