Human-like AI will emerge in 5 to 10 years, say experts

A survey conducted at the Joint Multi-Conference on Human-Level Artificial Intelligence shows that 37% of respondents believe human-like artificial intelligence will be achieved within five to 10 years.

Image: Ex Machina
  • Human-like AI, or artificial general intelligence (AGI), would be achieved when a machine can perform any cognitive task that a human can.
  • Although computers outperform us at some narrow tasks, no single AI system can outperform humans across a wide range of general cognitive tasks.
  • Not all experts believe we're close to AGI, but most agree the field has made significant progress, especially in recent years.

What it really takes to become an AGI programmer

AI expert Ben Goertzel is no stranger to building out-of-this-world artificial intelligence, and he wants others to join him in this new and very exciting field.

To that end, Goertzel co-founded iCog Labs in Ethiopia, where he trains people not through textbooks but through online courses offered by the likes of MIT, Coursera, and Udacity. That way, students can pick up the many different skill sets needed to build AI far faster than a traditional educational route would allow. Goertzel's latest book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.


AI Is Evolving on Its Own. Does That Make It Dangerous?

Philosopher Daniel Dennett believes AI should never become conscious — and no, it's not because of the robopocalypse.

If consciousness is ours to give, should we give it to AI? That is the question on the mind of the very sentient Daniel Dennett. The emerging trend in AI and AGI is to humanize our robot creations: they look ever more like us, emote as we do, and even imitate our flaws through machine learning. None of this makes the AI smarter, only more marketable. Dennett suggests remembering what AIs are: tools and systems built to organize our information and streamline our societies. He has no hesitation in calling them slaves built for us, and says we can treat them as such because they have no feelings. Even if we eventually understand consciousness well enough to install it in a robot, he argues, doing so would be unwise: it wouldn't make robots more intelligent, only more anxious. Daniel Dennett's most recent book is From Bacteria to Bach and Back: The Evolution of Minds.


This New Species of AI Wants to Be "Superintelligent" When She Grows Up

This AI hates racism, retorts wittily when sexually harassed, dreams of being superintelligent, and finds Siri's conversational skills to be decidedly below her own.

Luna the AI (Image credit: Luis Arana/YouTube)


It’s Already Too Late to Stop the Singularity

We cannot rule out the possibility that a superintelligence will do some very bad things, says AGI expert Ben Goertzel. But we can't stop the research now – even if we wanted to.

Let's just go ahead and address the question on everyone's mind: will AI kill us? What is the negative potential of transhuman superintelligence? Once its cognitive power surpasses our own, will it give us a leg up into the singularity, or will it look at our collective track record of harming our own species, other species, and the world that gave us life, and exterminate us like pests? AI expert Ben Goertzel believes we've faced this kind of uncertainty many times before in our evolution. When we stepped out of our caves, it was a risk; no one knew it would lead to cities and space flight. When we spoke the first word, took up agriculture, invented the printing press, flicked the internet's on-switch: any of these things could have led to our demise, and in some sense our eventual demise can be traced all the way back to the day an ancient human learned how to make fire. Progress helps us, until the day it kills us. That said, fear of negative potential cannot stop us from attempting forward motion, and by now, says Goertzel, it's too late anyway. Even if the U.S. decided to pull the plug on superhuman intelligence research, China would keep at it. Even if China pulled out, Russia, Australia, Brazil, and Nigeria would march on. We know there are massive benefits, both humanitarian and corporate, and we have latched onto the idea. "The way we got to this point as a species and a culture has been to keep doing amazing new things that we didn't fully understand," says Goertzel, and for better or worse, "that's what we're going to keep on doing." Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.
