
Artificial intelligence could far surpass our intelligence in a relatively short period of time. But AI expert Ben Goertzel knows that the foundation has to be strong for that artificial brain power to grow exponentially. It's all well and good to be super-intelligent, he argues, but without rationality and empathy to match, the results will be wasted and we could end up with nothing more than an incredible number-cruncher. In this illuminating chat, he makes the case for thinking bigger. Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.

Ben Goertzel: If you think much about physics and cognition and intelligence it’s pretty obvious the human mind is not the smartest possible general intelligence any more than humans are the highest jumpers or the fastest runners. We’re not going to be the smartest thinkers.

If you are going to work toward AGI rather than focusing on some narrow application, there are a number of different approaches you might take. I've spent some time surveying the AGI field as a whole and organizing an annual conference on AGI, and then I've spent a bunch more time on a specific AGI approach based on OpenCog, an open-source software platform. In the big picture, one way to approach AGI is to try to emulate the human brain at some level of precision. This is the approach I see, for example, Google DeepMind taking. They've taken deep neural networks, which in their common form are mostly a model of visual and auditory processing in the human brain, and now in their recent work, such as the DNC, the differentiable neural computer, they're coupling these deep networks that model visual or auditory processing with a memory matrix that models some aspect of what the hippocampus does, which is the part of the brain that deals with working memory and short-term memory, among other things. So this illustrates an approach where you take neural networks emulating different parts of the brain, and maybe you take more and more neural networks emulating different parts of the human brain, and you try to get them all to work together, not necessarily doing computational neuroscience, but trying to emulate the way different parts of the brain do their processing and the way they talk to each other.
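To make the memory-coupling idea concrete, here is a minimal toy sketch, in plain NumPy, of the mechanism a DNC-style system uses: a small controller network reading from and writing to an external memory matrix by content similarity. The class, dimensions and update rule are invented for illustration; this is not DeepMind's implementation.

```python
# Toy illustration (not DeepMind's code): a controller coupled to an external
# memory matrix through soft, content-based addressing.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TinyMemoryNet:
    def __init__(self, n_slots=16, width=8, seed=0):
        rng = np.random.default_rng(seed)
        self.memory = np.zeros((n_slots, width))             # the memory matrix
        self.W = rng.standard_normal((width, width)) * 0.1   # toy controller weights

    def step(self, x):
        key = np.tanh(self.W @ x)          # controller emits a read key
        w = softmax(self.memory @ key)     # content-based addressing over slots
        read = w @ self.memory             # soft (differentiable) read
        self.memory += np.outer(w, x)      # soft write using the same weights
        return read

net = TinyMemoryNet()
out = net.step(np.ones(8))                 # one read/write cycle
```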

A totally different approach is being taken by a guy named Marcus Hutter at the Australian National University. He wrote a beautiful book on universal AI in which he showed how to write a superhumanly intelligent thinking machine in something like 50 lines of code. The problem is that it would take more computing power than there is in the entire universe to run, so it's not practically useful, but he and his colleagues are then trying to scale down from this theoretical AGI to something that will really work.
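For readers curious what those few dozen lines express, the heart of Hutter's AIXI model is a single expectimax formula, reproduced here from memory and slightly simplified, so treat the notation as a sketch rather than the book's exact statement:

```latex
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
       \big[ r_t + \cdots + r_m \big]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The agent picks the action that maximizes expected future reward, averaging over every program q for a universal Turing machine U that is consistent with the history so far, weighted by two to the minus the length of q, so shorter programs count more. The sum over all programs is what makes the scheme uncomputable in practice.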

Now the approach we’re taking in the OpenCog project is different than either of those. We’re attempting to emulate at a very high level the way the human mind seems to work as an embodied social generally intelligent agent which is coming to grips with hard problems in the context of coming to grips with itself and its life in the world. We’re not trying to model the way the brain works at the level of neurons or neural networks. We’re looking at the human mind more from a high-level cognitive point of view. What kinds of memory are there? Well, there’s semantic memory about abstract knowledge or concrete facts. There’s episodic memory of our autobiographical history. There’s sensory-motor memory. There’s associative memory of things that have been related to us in our lives. There’s procedural memory of how to do things.

We then look at the different kinds of learning and reasoning the human mind can do. We can do logical deduction, sometimes, though we're not always good at it. We make emotional, intuitive leaps and strange creative combinations of things. We learn by trial and error and by habit. We learn socially by imitating, mirroring, emulating or opposing others. For each of these different kinds of memory and learning that the human mind has, one can attempt to achieve it with a cutting-edge computer science algorithm, rather than trying to realize each of those functions and structures in the way the brain does.
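As a rough illustration of that function-to-algorithm mapping, here is a small summary in code form; the pairings are my own paraphrase of the algorithm families Goertzel lists in the next paragraphs, not an official OpenCog specification.

```python
# Illustrative only: cognitive functions paired with the algorithm families
# described below for OpenCog-style systems.
COGNITIVE_MAP = {
    "semantic memory / logical deduction": "probabilistic logic engine",
    "episodic and sensory-motor memory": "deep neural networks for perception",
    "procedural learning by trial and error": "evolutionary (genetic-algorithm-style) program learning",
    "associative memory and salience": "attention-spreading neural-net dynamics",
    "creative leaps and surprise": "pattern mining over the shared knowledge store",
}

for function, algorithm in COGNITIVE_MAP.items():
    print(f"{function:40s} -> {algorithm}")
```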

So in OpenCog we have a central knowledge repository, which is very dynamic and lives in RAM on a large network of computers, that we call the AtomSpace. For the mathematicians or computer scientists in the audience, the AtomSpace is what you'd call a weighted, labeled hypergraph. It has nodes and it has links, and a link can go between two nodes, or a link can go between three, four, five or 50 nodes. Different nodes and links have different types, and the nodes and links can have numbers attached to them. A node or link could have a weight indicating a probability or a confidence, and it could have a weight indicating how important it is to the system right now, or how important it is in the long term, so that it gets kept around in the system's memory.
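To make the data structure concrete, here is a minimal toy sketch of a weighted, labeled hypergraph in the AtomSpace style. The class names and fields are invented for illustration and are not the real OpenCog API.

```python
# Toy weighted, labeled hypergraph: typed nodes, links of any arity, and
# numeric weights (truth and importance values) attached to every atom.
from dataclasses import dataclass

@dataclass
class Atom:
    atom_type: str           # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""           # node label (empty for links)
    outgoing: tuple = ()     # atoms a link connects; two, three, or fifty of them
    strength: float = 1.0    # probability-like weight
    confidence: float = 0.0  # how much evidence backs the strength
    importance: float = 0.0  # how important the atom is to the system right now

class AtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)
        return atom

space = AtomSpace()
cat = space.add(Atom("ConceptNode", "cat"))
animal = space.add(Atom("ConceptNode", "animal"))
# A binary link; higher-arity links just list more atoms in `outgoing`.
space.add(Atom("InheritanceLink", outgoing=(cat, animal),
               strength=0.95, confidence=0.9, importance=0.2))
```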

On this AtomSpace, this weighted, labeled hypergraph, we can have a lot of different AI processes working together cooperatively. The AtomSpace, the memory store, is what we would call neural-symbolic. That means we can represent nodes and links that are like neurons in the brain, which is fairly low-level, but we can also represent nodes and links at a higher level, representing pieces of symbolic logic expressions. So we can do explicit logical reasoning, which is pretty abstract, and low-level neural-net stuff in the same hypergraph, the same AtomSpace. Acting on this AtomSpace we have deep neural networks for visual and auditory perception. We have a probabilistic logic engine that does abstract reasoning. We have an evolutionary learning algorithm that uses genetic-algorithm-type methods to try to evolve radical new ideas and concepts and to look for patterns in data. And we have a neural-net-type dynamic that spreads activity and importance throughout the network, plus a few other algorithms, such as a pattern mining algorithm that just scans through the whole AtomSpace looking for surprising stuff. The trick is that all these different cognitive algorithms have to work together cooperatively, helping each other rather than hurting each other.
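Here is a standalone toy sketch of that cooperation: a shared store holding both neuron-like and symbolic atoms, with two simplified processes taking turns on it. The process names and update rules are invented for illustration and are not OpenCog's actual components.

```python
# Illustrative only: two simplified cognitive processes sharing one
# neural-symbolic store (atoms kept as plain dicts for brevity).
atomspace = [
    {"type": "NeuronNode", "name": "v1_unit_42", "importance": 0.1},            # low-level
    {"type": "ConceptNode", "name": "cat", "strength": 0.90, "importance": 0.3},
    {"type": "ImplicationLink", "out": ("cat", "animal"),
     "strength": 0.95, "importance": 0.2},                                       # symbolic
]

def spread_importance(space):
    # Neural-net-style dynamic: nudge every atom toward the mean importance.
    avg = sum(a["importance"] for a in space) / len(space)
    for a in space:
        a["importance"] += 0.1 * (avg - a["importance"])

def mine_patterns(space):
    # Pattern miner: flag atoms that are strong but currently getting little attention.
    return [a.get("name", a.get("out"))
            for a in space
            if a.get("strength", 0.0) > 0.9 and a["importance"] < 0.25]

spread_importance(atomspace)
surprising = mine_patterns(atomspace)   # -> [("cat", "animal")]
```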

See, the bottleneck in essentially every AI approach ever taken, be it a neural net, a logic engine, a genetic algorithm, whatever, has been what we call a combinatorial explosion. What that means is that you have a lot of data items: you have a lot of perceptions coming into your eye, or a lot of possible moves on the chess board, or a lot of possible ways to turn the steering wheel of the car. There are so many combinations of possible data items and possible things you could do that sifting through all those combinations becomes an exponential problem. If you have a thousand things, there are two to the one-thousandth ways to combine them, a number with more than 300 digits, and that's way too many. So how to sift through combinatorial explosions is the core problem everyone has to deal with. In a deep neural network as currently pursued, it's solved by making the network have a very specific structure that reflects the structure of the visual and auditory processing streams. In a logic engine you don't have that sort of luxury, because a logic engine has to deal with anything, not just sensory data. But what we do in OpenCog is we've worked out a system where each of the cognitive processes can help the others out when it gets stuck on some combinatorial explosion problem. So if a deep neural network trying to perceive things gets confused because it's dark, or because it's looking at something it never saw before, well, maybe the reasoning engine can come in and do some inference to cut through that confusion.
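The hand-off he describes can be sketched in a few lines; the functions, thresholds and "facts" below are invented for illustration and are not OpenCog's actual interfaces.

```python
# Illustrative only: when perception's guesses are all low-confidence, a
# reasoning step prunes them using background knowledge.

def perceive(observation):
    # Stand-in classifier: returns (label, confidence) guesses.
    if observation == "dark blurry shape":
        return [("cat", 0.30), ("dog", 0.30), ("chair", 0.25)]
    return [("cat", 0.90)]

def reason(known_facts, guesses):
    # Stand-in logic engine: drops guesses inconsistent with what is known.
    ruled_out = known_facts.get("ruled_out", set())
    return [(label, conf) for label, conf in guesses if label not in ruled_out]

guesses = perceive("dark blurry shape")
if max(conf for _, conf in guesses) < 0.5:                # perception is stuck
    guesses = reason({"ruled_out": {"dog", "chair"}}, guesses)
best_guess = max(guesses, key=lambda g: g[1])             # -> ("cat", 0.30)
```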

If logical reasoning is getting confused and doesn't know what step to take next, because there are just so many possibilities out there and not much information about them, well, maybe you fish into your sensory-motor memory and use deep learning to visualize something you saw before, and that gives you a clue for how to pare down the many possibilities the logic engine is seeing. Now, you can model this kind of cognitive synergy mathematically using a branch of mathematics called category theory, which is something I've been working on lately. But what's even more interesting is to build a system that manifests this and achieves general intelligence as a result, and that's what we're doing in the OpenCog project.

We’re not there yet to general intelligence but we’re getting there step by step. We’re using our open source, OpenCog platform to control David Hanson’s beautiful, incredibly realistic humanoid robots like the Sophia robot which has gotten a lot of media attention in the last year. We’re using OpenCog to analyze biological data related to the genetics of longevity and we’re doing a host of other consulting projects using this. So we’re proceeding on an R&D track and an application track at the same time. But our end goal with the system is to use cognitive synergy on our neural-symbolic knowledge store to achieve initially human level AI but that’s just an early stage goal. And then AI much beyond the human level.

And that is another advantage of taking an approach that doesn't adhere slavishly to the human brain. The brain is pretty good at recognizing faces, because millions of years of evolution went into that part of the brain. But at doing science or math or logical reasoning or strategic planning, we're pretty bad; these are things we've started doing only recently in evolutionary time, as a result of modern culture. So I think OpenCog and other AI systems actually have the potential to be far better than human beings at the logical and strategic side of things. And I think that's quite important, because if you take a human being and upgrade them to, like, a 10,000 IQ, the outcome might not be what you want, because you've got a motivational system and an emotional system that basically evolved in prehuman animals. Whereas if you architect a system where rationality and empathy play a deeper role in the architecture, then as its intelligence ramps way up we may find a more beneficial outcome.

