
“It became like a God”

For now, artificial intelligence is nothing to fear. But as it rapidly develops in the years ahead?

Let’s face it: artificial intelligence is everywhere.

It’s in outdoor ads for all sorts of business applications, in Google’s DeepMind and IBM’s projects, in countless research departments in universities across the planet, and in the European Union’s $1.3 billion Human Brain Project. The list goes on and on.

Between 2012 and 2016, funding for AI research, both private and federal, grew at a rate of about 50 percent per year. We hear that machine learning, self-adaptive neural nets, and data mining are changing the world. Google’s DeepMind opens its website with a bold statement: “AI could be one of humanity’s most useful inventions. We research and build safe AI systems that learn how to solve problems and advance scientific discovery for all.”

If that’s the case, why are so many people either skeptical or scared of the whole thing? Elon Musk has equated AI with “summoning the demon” and creating “an immortal dictator from which we can never escape.” Stephen Hawking famously declared that “AI could spell the end of the human race.”

These are very heavy words. A serious existential risk for humankind? Let’s take a closer look.

First, we must distinguish between types of AI, particularly between Artificial General Intelligence (AGI) and the simpler and more realistic Artificial Narrow Intelligence (ANI). Over a year ago, Tad Friend wrote a very informative essay for The New Yorker, distinguishing the two.

ANI refers to smart computer programs that use neural nets and machine-learning techniques to search for optimized solutions to a variety of problems. These range from beating a chess master or a Jeopardy champion to zeroing in on a challenging medical diagnosis to refining a consumer’s profile so that more precisely targeted ads reach his or her social media feeds.
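
To make “searching for optimized solutions” concrete, here is a minimal sketch, in Python with NumPy, of the kind of optimization that sits underneath most ANI systems: a tiny neural network adjusts its weights by gradient descent until it solves XOR, a problem no single linear rule can capture. The network, the task, and every parameter are illustrative assumptions of mine, not anything from DeepMind, IBM, or any product mentioned here.

```python
# A toy version of the optimization at the heart of ANI: a tiny
# neural net learns XOR by gradient descent. Everything here is
# illustrative; no real system mentioned in the article works at
# this scale.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: XOR. No single linear rule separates these labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for step in range(10_000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error, via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every weight downhill on the error surface.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # typically converges to [[0], [1], [1], [0]]
```

Nothing in this loop “understands” XOR; the program simply follows the error gradient downhill. Scaled up by many orders of magnitude, the same kind of search beats chess masters and sharpens ad targeting.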

ANI is thriving—it’s the kind of AI we see advertised everywhere, offering all kinds of solutions to boost a company’s bottom line. It’s no wonder that mega-companies like Google, Yahoo, and IBM are going after ANI. Of course, it’s mostly just commercial hype. Machine-learning methods do lead to more efficient, machine-controlled algorithms, but they are a very far cry from what we believe AGI would entail. They are no evil dictator driven to exterminate useless, dumb, carbon-based humans. Not even close. At least not at present.

Creative autonomy

The harder questions begin as we ramp up ANI to become ever more powerful. Given that many ANI programs search for their own optimized pathways, human programmers often don’t know how their machines arrived at a given result. Ke Jie, the Chinese Go master who lost to Google’s AlphaGo in 2017, summed it up after his defeat: “Last year, it was quite humanlike when it played. But this year, it became like a god of Go.”

Put differently, the programs in many ANI applications act as black boxes, developing their own hidden strategies. The Google programmers responsible for AlphaGo were dumbfounded by how the program developed its own winning approach, well beyond their own playing ability.

A paper published last year—under the title “The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities”—reveals amazing cases in which programs evolved under their own rules in ways that seem to mimic biological evolution in nature. The important point is that these programs apparently developed some sort of creative autonomy: they somehow “understood” that to achieve their goal, they had to break the rules and create new ones.
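
To see the bare mechanism behind such anecdotes, here is a minimal genetic-algorithm sketch in Python; the bit-string task, the fitness function, and all parameters are illustrative assumptions of mine, not drawn from the paper. The point it illustrates: the programmer writes down only a scoring rule, and evolution optimizes what that rule literally rewards, not what the programmer intended.

```python
# A bare-bones evolutionary algorithm: mutation plus selection on a
# population of bit strings. The task and every parameter here are
# illustrative only.
import random

random.seed(42)

GENOME_LEN, POP_SIZE, MUT_RATE = 20, 50, 0.02
TARGET = [1] * GENOME_LEN  # the outcome the experimenter has in mind

def fitness(genome):
    # The experimenter's proxy for success. Evolution optimizes THIS
    # literal score, not the intent behind it -- the gap the paper's
    # anecdotes exploit.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with small probability.
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break  # a perfect genome has evolved
    # Selection: keep the fittest half, refill with mutated offspring.
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in survivors]

best = population[0]
print(f"generation {generation}: best fitness {fitness(best)}/{GENOME_LEN}")
```

No one ever writes the winning genome down; it emerges from mutation and selection. That gap between the literal score and the human intent is exactly where the paper’s “rule-breaking” surprises live.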

These results show that we can learn a lot from ANIs. In a sense, they are giving us new pathways to understand different kinds of intelligence and even the roots of creativity. As long as they remain on task and use their self-triggered autonomy within well-established and controlled boundaries, all is well. Contention begins when their sophistication goes beyond staying on task and they “decide” that they are ready to take on other jobs.

The scary question

If you are wondering whether programs can “decide,” well, we have seen that they can, especially as they develop their own black-box pathways to accomplish their mission. This means that they “know” (and I use quotes to remind us that knowing is a loaded word in this context) which path is the best one to take. If they become “empowered” by success—can programs become vain?—they may decide to try their luck in other ventures. As their adaptive-learning neural nets evolve and they continue to solve new problems and find new challenges, what’s to stop them? That’s the question that scares lots of people.

Thankfully, we have no idea if this critical point in decision-making is ever achievable with a given ANI. It may be that the very architecture of ANIs will preclude such a jump in functionality. A lot of the fear of AGIs comes from anthropomorphizing the technology, making computers more and more like humans, with all the good and the bad that comes with it.

There is a huge difference between a program that is so sophisticated that it can emulate human behavior (talk and “feel” like we do), and one that actually achieves a kind of autonomy that forges a new, and unknown, path ahead. The fear of an AGI is that the creature becomes a monster, that we lose power over what we invented. Once we create an alternative form of true intelligence, there is no telling what this intelligence will be like, what kind of value system, if any, it will attach itself to.

A machine doesn’t have millions of years of social evolution behind it; it doesn’t need to be altruistic to its kind or to protect its tribe. It has only itself to care for, a truly dangerous proposition. Those who believe we may train or control such an intelligence with safeguards may be as deluded as Victor Frankenstein was. How could we aspire to censor an intelligence vastly superior to ours?

Fortunately, we are not even beginning to understand what building such a machine would take—or whether an emergent transition in complexity can truly make an ANI into an AGI. Experts from the private and military sectors are no doubt working on this issue, knowing only too well that even though this is a faraway dream, whoever controls the technology first will surely control the world. Unfortunately, this control would be very short-lived.

That’s not a world we would want to live in.

