The biggest A.I. risks: Superintelligence and the elite silos

When it comes to raising superintelligent A.I., kindness may be our best bet.

  • We have no guarantee that a superintelligent A.I. is going to do what we want. Once we create something many times more intelligent than we are, it may be "insane" to think we can control what it does.
  • What's the best bet to ensure superintelligent A.I. remains compliant with humans and does good works, such as advancing medicine? Raise it in a way that imbues it with compassion and understanding, says Goertzel.
  • One way to limit "people doing bad things out of frustration" is to plug the entire world into the A.I. economy, so that developers from any country can monetize their code.

Artificial general intelligence: The domain of the patient, philosophical coder

Sure, some expert-level knowledge is needed if you want to program artificial intelligence. But AI expert Ben Goertzel posits that you also need something that Guns N' Roses sang about: a lil' patience.

If you want instant gratification, this isn't the line of work for you. Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.

It’s Already Too Late to Stop the Singularity

We cannot rule out the possibility that a superintelligence will do some very bad things, says AGI expert Ben Goertzel. But we can't stop the research now – even if we wanted to.

Let’s just go ahead and address the question on everyone’s mind: will AI kill us? What is the negative potential of transhuman superintelligence? Once its cognitive power surpasses our own, will it give us a leg-up in 'the singularity', or will it look at our collective track record of harming our own species, other species, and the world that gave us life, and exterminate us like pests? AI expert Ben Goertzel believes we’ve been at this point of uncertainty many times before in our evolution. When we stepped out of our caves, it was a risk – no one knew it would lead to cities and space flight. When we spoke the first word, took up agriculture, invented the printing press, flicked the internet on-switch – all of these things could have led to our demise, and in some sense our eventual demise can be traced all the way back to the day an ancient human learned how to make fire. Progress helps us, until the day it kills us. That said, fear of negative potential cannot stop us from attempting forward motion – and by now, says Goertzel, it’s too late anyway. Even if the U.S. decided to pull the plug on superhuman intelligence research, China would keep at it. Even if China pulled out, Russia, Australia, Brazil, and Nigeria would march on. We know there are massive benefits – both humanitarian and corporate – and we have latched onto the idea. "The way we got to this point as a species and a culture has been to keep doing amazing new things that we didn’t fully understand," says Goertzel, and for better or worse, "that’s what we’re going to keep on doing." Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.


AI Will Surpass Human Ability Before the Century Is Over

One day this century, a robot of super-human intelligence will offer you the chance to upgrade your mind, says AGI expert Ben Goertzel. Will you take it?

For all the talk of AI, it always seems that gossip moves faster than progress. But within this century we may fully realize the visions science fiction has promised us, says Dr. Ben Goertzel – for better or worse. Humanity will always create and invent, but the last invention we will ever need to make is a human-level Artificial General Intelligence, a mind able to create a new AGI with super-human intelligence and to keep building smarter and smarter versions of itself. It will provide all basic human needs – food, shelter, water – and those of us who wish to experience a higher echelon of consciousness and intelligence will be able to upgrade to become super-human. Or perhaps there will be war – there’s a bit of uncertainty there, admits Goertzel. "There’s a lot of work to get to the point where intelligence explodes… But I do think it’s reasonably probable we can get there in my lifetime, which is rather exciting," he says. Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.
