A survey conducted at the Joint Multi-Conference on Human-Level Artificial Intelligence shows that 37% of respondents believe human-like artificial intelligence will be achieved within five to 10 years.
- Human-like AI, or artificial general intelligence (AGI), would be achieved when a machine can perform any cognitive task that a human can.
- Although computers can outperform us in some narrow tasks, no single AI system can yet outperform humans across a wide variety of general cognitive tasks.
- Not all experts believe we're close to AGI. But most agree the field has been making significant progress, especially in recent years.
A surprising survey

Will machines ever achieve human-like intelligence? The answer is yes, and within five to 10 years, according to 37% of respondents to a survey issued at the [Joint Multi-Conference on Human-Level Artificial Intelligence](https://www.hlai-conf.org/) (HLAI) held last month in Prague.

The survey, which was conducted by the AI startup SingularityNET and the AI research and development company GoodAI, found that 28% of respondents expected AGI to emerge within the next two decades, while just 2% didn't believe humans would ever develop AGI.

The survey also asked respondents to rate the sectors in which they thought AI could have the greatest impact. The results broke down like this:

- Healthcare (46%)
- Logistics (41%)
- Customer service (38%)
- Banking and finance (34%)
- Agriculture, retail, software development, and manufacturing (28% each)

"It's no secret that machines are [advancing exponentially](https://www.futuretimeline.net/21stcentury/images/future-timeline-technology-singularity.jpg) and will eventually surpass human intelligence," said Ben Goertzel, SingularityNET's CEO and creator of the software behind [Sophia](https://en.wikipedia.org/wiki/Sophia_(robot)), a social humanoid robot. "But, as these survey results suggest, an increasing number of experts believe this 'Singularity' point may occur much sooner than is commonly thought. Artificial general intelligence at the human level or beyond, as many respondents to our poll noted, could very well become a reality within the next decade."
Gauging expectations

A [2016 survey](https://arxiv.org/pdf/1705.08807.pdf) of AI researchers who had been published in top peer-reviewed journals found slightly less exciting results. That survey asked respondents to estimate how many years it would be before AI possessed "high-level machine intelligence," which they defined as being "achieved when unaided machines can accomplish every task better and more cheaply than human workers."

The respondents were also asked about specific AI milestones, such as when AI would outperform humans in complex tasks like surgery.
Timelines showing 50% probability intervals for achieving selected AI milestones, based on survey respondents' opinions. Intervals represent the date range from 25% to 75% probability of the event occurring; circles denote the year with a 50% probability that AI will achieve or exceed human performance. (Grace et al., 2018)
AI expert Ben Goertzel is no stranger to building out-of-this-world artificial intelligence, and he wants others to join him in this new and exciting field. That's why he co-founded iCog Labs in Ethiopia, where he trains people not through textbooks but through online courses offered by the likes of MIT, Coursera, and Udacity. That way, they can learn the many different skill sets needed to build AI far faster than they would through a traditional educational route. Goertzel's latest book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.
Philosopher Daniel Dennett believes AI should never become conscious — and no, it's not because of the robopocalypse.
If consciousness is ours to give, should we give it to AI? This is the question on the mind of the very sentient Daniel Dennett. The emerging trend in AI and AGI is to humanize our robot creations: they look ever more like us, emote as we do, and even imitate our flaws through machine learning. None of this makes the AI smarter, only more marketable. Dennett suggests remembering what AIs are: tools and systems built to organize our information and streamline our societies. He has no hesitation in saying that they are slaves built for us, and that we can treat them as such because they have no feelings. Even if we eventually understood consciousness well enough to install it in a robot, doing so would be unwise. It won't make them more intelligent, he says, only more anxious. Daniel Dennett's most recent book is From Bacteria to Bach and Back: The Evolution of Minds.
This AI hates racism, retorts wittily when sexually harassed, dreams of being superintelligent, and finds Siri's conversational skills decidedly below her own.
We cannot rule out the possibility that a superintelligence will do some very bad things, says AGI expert Ben Goertzel. But we can't stop the research now – even if we wanted to.
Let's just go ahead and address the question on everyone's mind: will AI kill us? What is the negative potential of transhuman superintelligence? Once its cognitive power surpasses our own, will it give us a leg up into "the singularity," or will it look at our collective track record of harming our own species, other species, and the world that gave us life, and exterminate us like pests?

AI expert Ben Goertzel believes we've been at this point of uncertainty many times before in our evolution. When we stepped out of our caves, it was a risk; no one knew it would lead to cities and spaceflight. When we spoke the first word, took up agriculture, invented the printing press, or flicked the internet's on-switch, any of these things could have led to our demise. In some sense, our eventual demise can be traced all the way back to the day an ancient human learned to make fire. Progress helps us, until the day it kills us.

That said, fear of negative potential cannot stop us from attempting forward motion, and by now, says Goertzel, it's too late anyway. Even if the U.S. decided to pull the plug on superhuman intelligence research, China would keep at it. Even if China pulled out, Russia, Australia, Brazil, and Nigeria would march on. We know there are massive benefits, both humanitarian and corporate, and we have latched onto the idea. "The way we got to this point as a species and a culture has been to keep doing amazing new things that we didn't fully understand," says Goertzel, and for better or worse, "that's what we're going to keep on doing." Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.