It’s Already Too Late to Stop the Singularity
We cannot rule out the possibility that a superintelligence will do some very bad things, says AGI expert Ben Goertzel. But we can't stop the research now – even if we wanted to.
Ben Goertzel: Some people are gravely worried about the uncertainty and the negative potential associated with transhuman, superhuman AGI. And indeed we are stepping into a great unknown realm.
It’s almost like a Rorschach type of thing really. I mean we fundamentally don’t know what a superhuman AI is going to do and that’s the truth of it, right. And then if you tend to be an optimist you will focus on the good possibilities. If you tend to be a worried person who’s pessimistic you’ll focus on the bad possibilities. If you tend to be a Hollywood movie maker you focus on scary possibilities maybe with a happy ending because that’s what sells movies. We don’t know what’s going to happen.
I do think however this is the situation humanity has been in for a very long time. When the cavemen stepped out of their caves and began agriculture we really had no idea that was going to lead to cities and space flight and so forth. And when the first early humans created language to carry out simple communication about the moose they had just killed over there they did not envision Facebook, differential calculus and MC Hammer and all the rest, right. I mean there’s so much that has come about out of early inventions which humans couldn’t have ever foreseen. And I think we’re just in the same situation. I mean the invention of language or civilization could have led to everyone’s death, right. And in a way it still could. And the creation of superhuman AI it could kill everyone and I don’t want it to. Almost none of us do.
Of course the way we got to this point as a species and a culture has been to keep doing amazing new things that we didn’t fully understand. And that’s what we’re going to keep on doing. Nick Bostrom’s book was influential but I felt that in some ways it was a bit deceptive the way he phrased things. If you read his precise philosophical arguments which are very logically drawn what Bostrom says in his book, Superintelligence, is that we cannot rule out the possibility that a superintelligence will do some very bad things. And that’s true. On the other hand some of the associated rhetoric makes it sound like it’s very likely a superintelligence will do these bad things. And if you follow his philosophical arguments closely he doesn’t show that. What he just shows is that you can’t rule it out and we don’t know what’s going on.
I don’t think Nick Bostrom or anyone else is going to stop the human race from developing advanced AI, because it’s a source of tremendous intellectual curiosity but also of tremendous economic advantage. So if, let’s say, President Trump decided to ban artificial intelligence research – I don’t think he’s going to, but suppose he did. China will keep doing artificial intelligence research. If the U.S. and China ban it, you know, Africa will do it. Everywhere around the world has AI textbooks and computers. And everyone now knows you can make people’s lives better and make money from developing more advanced AI. So there’s no possibility in practice to halt AI development. What we can do is try to direct it in the most beneficial direction according to our best judgment. And that’s part of what leads me to pursue AGI via an open source project such as OpenCog. I respect very much what Google, Baidu, Facebook, Microsoft and these other big companies are doing in AI. There are many good people there doing good research with good-hearted motivations. But I guess I’m enough of an old leftist raised by socialists that I’m skeptical that a company whose main motive is to maximize shareholder value is really going to do the best thing for the human race if it creates a human-level AI.
I mean they might. On the other hand there’s a lot of other motivations there and a public company in the end has a fiduciary responsibility to their shareholders. All in all I think the odds are better if AI is developed in a way that is owned by the whole human race and can be developed by all of humanity for its own good. And open source software is sort of the closest approximation that we have to that now. So our aspiration is to grow OpenCog into sort of the Linux of AGI and have people all around the world developing it to serve their own local needs and putting their own values and understanding into it as it becomes more and more intelligent.
Certainly this doesn’t give us any guarantee. We can observe that Linux has fewer bugs than Windows or OS X, and it’s open source, so more eyeballs on something can sometimes make it more reliable. But there’s no solid guarantee that making an AGI open source will make the singularity come out well. My gut feeling, though, is that there are enough hard problems in creating a superhuman AI, and in having it respect human values and grow up with a relationship of empathy with people, without the young AGI also getting wrapped up in competition of country versus country, company versus company, and internal politics within companies or militaries. We don’t want to add these human and primate social-status competition dynamics to the challenges already faced in AGI development.
Let’s just go ahead and address the question on everyone’s mind: will AI kill us? What is the negative potential of transhuman superintelligence? Once its cognitive power surpasses our own, will it give us a leg-up in 'the singularity', or will it look at our collective track record of harming our own species, other species, and the world that gave us life, and exterminate us like pests? AI expert Ben Goertzel believes we’ve been at this point of uncertainty many times before in our evolution. When we stepped out of our caves, it was a risk – no one knew it would lead to cities and space flight. When we spoke the first word, took up agriculture, invented the printing press, flicked the internet on-switch – all of these things could have led to our demise, and in some sense our eventual demise can be traced all the way back to the day an ancient human learnt how to make fire. Progress helps us, until the day it kills us. That said, fear of negative potential cannot stop us from attempting forward motion – and by now, says Goertzel, it’s too late anyway. Even if the U.S. decided to pull the plug on superhuman intelligence research, China would keep at it. Even if China pulled out, Russia, Australia, Brazil, and Nigeria would march on. We know there are massive benefits – both humanitarian and corporate – and we have latched onto the idea. "The way we got to this point as a species and a culture has been to keep doing amazing new things that we didn’t fully understand," says Goertzel, and for better or worse, "that’s what we’re going to keep on doing." Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.