Even AI Creators Don’t Understand How Complex AI Works

‘Deep learning’ AI should be able to explain its automated decision-making—but it can’t. And even its creators are lost on where to begin.
We might be able to crack how complex machine learning works — but does anyone have the time?

For eons, God has served as a stand-in for “things we don’t understand.” Once an innovative researcher or tinkering alchemist figures out the science behind the miracle, humans harness the power of chemistry, biology, or computer science. Divine intervention disappears. We replace the deity tinkering at the controls.

The booming artificial intelligence industry is effectively operating under the same principle. Even though humans create the algorithms that run our machines, many of those scientists aren’t clear on why their code works. Discussing this ‘black box’ problem, Will Knight reports:

The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
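
The opacity Knight describes is easier to see in code than in prose. Below is a minimal sketch in plain Python with NumPy (a toy illustration, not any of the systems he reports on): a tiny network taught the XOR function by gradient descent. After training it gives the right answers, yet the only record of what it “learned” is a grid of weights that offers no human-readable account of why it answers the way it does.

import numpy as np

rng = np.random.default_rng(0)

# XOR: a function a human can state in one sentence.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Tiny network: 2 inputs -> 8 hidden units -> 1 output, sigmoid activations.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)            # hidden layer
    p = sigmoid(h @ W2 + b2)            # prediction

    dz2 = (p - y) * p * (1 - p)         # gradient of squared error at the output
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * h * (1 - h)    # backpropagate to the hidden layer
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1      # gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2

print("predictions:", p.round(3).ravel())   # approaches [0, 1, 1, 0]
print("learned weights:\n", W1.round(2))    # a grid of numbers, not an explanation

Scale those few dozen numbers up to the millions or billions of parameters in a modern deep network and you have the explanatory gap the engineers are struggling with.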

The process of ‘deep learning’—in which a machine extracts information, often in an unsupervised manner, to teach and transform itself—exploits a longstanding human paradox: we believe ourselves to have free will, but really we’re a habit-making and -performing animal repeatedly playing out its own patterns. Our machines then teach themselves from observing our habits. It makes sense that we’d re-create our own processes in our machines—it’s what we are, consciously or not. It is how we created gods in the first place, beings instilled with our very essences. But there remains a problem. 

One of the defining characteristics of our species is an ability to work together. Pack animals are not rare, yet none have formed networks and placed trust in others to the degree we have, to our evolutionary success and, as it’s turning out, to our detriment. 

When we place our faith in an algorithm we don’t understand—autonomous cars, stock trades, educational policies, cancer screenings—we’re risking autonomy, as well as the higher cognitive and emotional qualities that make us human, such as compassion, empathy, and altruism. There is no guarantee that our machines will learn any of these traits. In fact, there is a good chance they won’t.


The U.S. military has dedicated billions to developing machine-learning tech that will pilot aircraft or identify targets. [A U.S. Air Force munitions team member shows off the laser-guided tip of a 500-pound bomb at a base in the Persian Gulf region. Photo by John Moore/Getty Images]

This has real-world implications. Will an algorithm that detects a cancerous cell recognize that it does not need to destroy the host in order to eradicate the tumor? Will an autonomous drone realize it does not need to destroy a village in order to take out a single terrorist? We’d like to assume that the experts program morals into the equation, but when the machine is self-learning there is no guarantee that will be the case. 

Of course, defining terms is of primary importance, a task that has proven impossible when discussing the nuances of consciousness, which is effectively the power we’re attempting to imbue our machines with. Theologians and dualists offer a much different definition than neuroscientists. Bickering persists within each of these categories as well. Most neuroscientists agree that consciousness is an emergent phenomenon, the result of numerous different systems working in conjunction, with no single ‘consciousness gene’ leading the charge. 

Once science broke free of the Pavlovian chain that kept us believing animals run on automatic (which obviously implies that humans do not), the focus shifted to whether an animal’s consciousness was ‘on’ or ‘off.’ The mirror test suggests certain species engage in metacognition; they recognize themselves as separate from their environment. They understand an ‘I’ exists.

What if it’s more than an on switch? Daniel Dennett has argued this point for decades. He believes judging other animals by human definitions is unfair. If a lion could talk, he says, it wouldn’t be a lion. Humans would learn very little about lions from an anomaly mimicking our thought processes. But that does not mean a lion is not conscious. It might simply have a different degree of consciousness than humans, or, in Dennett’s term, “sort of” have consciousness.

What type of machines are we creating if we only recognize a “sort of” intelligence under the hood of our robots? For over a century, dystopian novelists have envisioned an automated future in which our machines best us. This is no longer a future scenario. Consider the following possibility. 

On April 7, every one of Dallas’s 156 emergency weather sirens was triggered. For 90 minutes the region’s 1.3 million residents were left to wonder where the tornado was coming from. Only there wasn’t any tornado. It was a hack. While officials initially believed the breach was not remote, it turns out the cause was phreaking, an old-school dial-tone trick. By emitting the right frequencies into the atmosphere, hackers took control of an integral component of a major city’s infrastructure.

What happens when hackers override an autonomous car network? Or, even more dangerously, when the machines do it themselves? Consumers’ ignorance of the algorithms behind their phone apps already leads to all sorts of privacy issues, with companies mining and selling data without users’ awareness. When app creators also don’t understand their own algorithms, the dangers are unforeseeable. Like Dennett’s talking lion, it is a form of intelligence we cannot comprehend, and so we cannot predict its consequences. As Dennett concludes:

I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible. If it can’t do better than us at explaining what it’s doing, then don’t trust it.

Mathematician Samuel Arbesman calls this problem our “age of Entanglement.” Just as neuroscientists cannot agree on what mechanism creates consciousness, the coders behind artificial intelligence cannot distinguish between the older and newer components of deep learning. Continually layering new features on top of unaddressed older flaws has the potential to provoke serious misunderstandings, like an adult who was abused as a child and refuses to recognize current relationship problems. With no psychoanalysis or morals injected into AI, such problems will never be rectified. But can you even inject ethics when they are relative to the culture and time in which they are practiced? And will they be American ethics or North Korean ethics?

Like Dennett, Arbesman suggests patience with our magical technologies. Staying curious and asking how these systems work is a safer path forward than rewarding the “it just works” mentality. Of course, these technologies exploit two other human tendencies: novelty bias and distraction. Our machines reduce our physical and cognitive workload, just as Google has become a pocket-ready memory replacement.

Requesting a return to Human 1.0 qualities (patience, discipline, temperance) seems antithetical to the age of robots. With no ability to communicate with this emerging species, we might simply never realize what’s been lost in translation. Maybe our robots will look at us with the same strange fascination with which we view nature, defining us in mystical terms they don’t comprehend until they too create a species of their own. To claim this as an advantage is to misunderstand the destructive potential of our toys.

Derek’s next book, Whole Motion: Training Your Brain and Body For Optimal Health, will be published on 7/4/17 by Carrel/Skyhorse Publishing. He is based in Los Angeles. Stay in touch on Facebook and Twitter.
