
In his book A Brief History of AI, Michael Wooldridge, a professor of computer science at the University of Oxford and an AI researcher, explains that AI is not about creating life, but rather about creating machines that can perform tasks requiring intelligence. 

Wooldridge discusses two approaches to AI: symbolic AI, which involves coding human knowledge into machines, and machine learning, which allows machines to learn from examples to perform specific tasks. Progress in AI stalled in the 1970s due to a lack of data and computational power, but advances in both over the last 15 years have driven significant progress. AI can now outperform humans at certain narrow tasks, but the grand dream of AI is artificial general intelligence (AGI): creating machines with the same intellectual capabilities as humans. One challenge for AI is giving machines social skills, such as cooperation, coordination, and negotiation.

The path to conscious machines is slow and complex, and the mystery of human consciousness and self-awareness remains unsolved. The limits of computing are only bounded by imagination.

MICHAEL WOOLDRIDGE: AI is not about trying to create life, right? That's not what it's about at all- but it very much feels like that. I mean, if we ever achieved the ultimate dream of AI, which I call the "Hollywood dream of AI," the kind of thing that we see in Hollywood movies, then we will have created machines that are conscious, potentially, in the same way that human beings are. So it's very much like that dream of creating life- and that, in itself, is a very old dream.

It goes back to the ancient Greeks: The Greeks had myths about the blacksmith to the gods, who could fashion living creatures from metal. In medieval Prague, they had the myth of the 'Golem,' a creature that was fashioned from clay and brought to life. You know, the dream of creating life from nothing. So, it's a fascinating idea. It's an idea that's been there throughout human history, but it's an idea that we seem to now have the tools to potentially make real.

Hi, my name's Mike Wooldridge. I'm a professor of computer science at the University of Oxford and an AI researcher, and most recently, I'm the author of "A Brief History of AI," out now from Flatiron Books.

So John McCarthy was an American researcher, and he applied for funding from the Rockefeller Foundation for a summer school at Dartmouth, held in 1956. What he had to do for this funding bid was to give a name to what they wanted to do. And so he picked the term 'Artificial Intelligence,' and it's the name that stuck.

So what McCarthy was working in was a tradition in artificial intelligence which is called 'Symbolic AI.' When we consider what we should do, we kind of have a conversation with ourselves: "I should do this because X and Y and Z; no, I shouldn't do it because A and B," and so on. And Symbolic AI is about trying to recreate that kind of reasoning.

How do we approach artificial intelligence? How do we go about doing it? We wanna build a machine that can do some task which requires intelligence in humans- let's say translating French into English. So the Symbolic AI view of this is that you go and find somebody who's a real expert, and you find out from them all the knowledge that they use when they translate from French to English, and you code it up in what are computer versions of sentences. And if you do that right, or so the idea goes, then the machine will have that human expertise. That's the Symbolic AI approach, right: human intelligent behavior is a problem of knowledge. If you give the machine the right knowledge, it will be able to solve the problem.
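
To make the symbolic recipe concrete, here is a deliberately toy sketch in Python (the lexicon and the word-by-word strategy are illustrative assumptions, not anything from the book): all of the "expertise" is knowledge a human wrote down by hand.

```python
# Toy Symbolic AI: the translation "knowledge" is hand-coded rules.
# This tiny lexicon is a hypothetical stand-in for an expert's knowledge.
LEXICON = {
    "le": "the", "chat": "cat", "chien": "dog",
    "mange": "eats", "dort": "sleeps",
}

def translate(french_sentence: str) -> str:
    """Translate word by word, using only the hand-written rules."""
    words = french_sentence.lower().split()
    return " ".join(LEXICON.get(w, f"<unknown: {w}>") for w in words)

print(translate("le chat dort"))  # -> "the cat sleeps"
```

Real symbolic systems encoded far richer grammar and world knowledge than this, but the principle is the same: if the hand-coded knowledge is right, the behavior follows from it.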

But there's a different tradition. It says, "Look, forget about trying to tell the machine how to do it by giving it the knowledge. Just show the machine what you want it to do, and get the machine to learn." In the French-to-English translation example, you're not telling it how to do the translation. You're just saying, "Look, for this input, this is what I would want you to produce as the output. For this French input, I would want this English output." And you give it lots of examples like that. And the idea is it will learn how to do it. So that's what machine learning is all about.
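
As a contrasting sketch (a miniature of statistical word alignment, assumed here purely for illustration), the program below is never told how to translate. It only sees example input-output pairs and counts which English words tend to co-occur with each French word:

```python
from collections import Counter, defaultdict

# Training data: (input, desired output) pairs - we show the machine
# what we want, never how to do it. The sentence pairs are made up.
EXAMPLES = [
    ("le chat dort", "the cat sleeps"),
    ("le chien dort", "the dog sleeps"),
    ("le chat mange", "the cat eats"),
]

cooc = defaultdict(Counter)  # french word -> counts of co-occurring english words
f_count, e_count = Counter(), Counter()
for fr, en in EXAMPLES:
    f_words, e_words = fr.split(), en.split()
    f_count.update(f_words)
    e_count.update(e_words)
    for f in f_words:
        cooc[f].update(e_words)

def best_match(f: str) -> str:
    # Dice score: favors english words that appear when (and only when) f does.
    return max(cooc[f], key=lambda e: 2 * cooc[f][e] / (f_count[f] + e_count[e]))

def translate(fr: str) -> str:
    return " ".join(best_match(w) for w in fr.split())

print(translate("le chien mange"))  # -> "the dog eats"
```

Notice that "le chien mange" never appears in the training examples; the program generalizes from what it has seen, which is the essence of learning rather than programming.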

And the techniques themselves are not a new thing. In the 1940s, two researchers called McCulloch and Pitts came up with the idea for what are now called 'neural networks.' But throughout the '60s and early '70s, progress really stalled, and so there was a backlash against AI in the mid-1970s- a period that came to be called 'The AI Winter.'
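
A McCulloch-Pitts neuron is simple enough to sketch in a few lines (the weights and thresholds below are hand-picked for illustration, not learned): binary inputs are summed with weights and compared against a threshold.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: output 1 ("fire") if the weighted sum
    of the inputs reaches the threshold, otherwise output 0."""
    return int(sum(x * w for x, w in zip(inputs, weights)) >= threshold)

# With the right weights and threshold, a single unit computes a logic gate:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b}  AND={AND(a, b)}  OR={OR(a, b)}")
```

Networks of such units can, in principle, compute any logical function. What was missing for decades, as the next point explains, was the data and computing power to train large networks rather than wire them by hand.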

It turned out that to make neural networks work, you needed lots and lots of data- but also, these things are computationally very expensive. You need lots of compute power in order to make these neural networks work. And that's the area where we've seen lots of progress over the last 15 years. That's really the reason that we're having this conversation today. That's the reason that AI is such an important field at the moment.

So most of contemporary AI is focused on getting AI systems to do very, very narrow tasks, very, very specific things. And in those specific tasks, it might be better than any living human being- but it can't do anything else. You can drive a car, I can drive a car; I can then get out of the car and play a game of football (rather badly, in my case), and then make a good meal and tell a joke. I can do the whole range of things.

Consider a driverless car: however good it is at driving, it's doing one tiny, narrow thing. So, the grand dream of AI- it's not formalized anywhere, there's no very specific version of it- nowadays goes by the name of 'Artificial General Intelligence,' AGI. And basically, if AGI succeeds, if we achieve that grand dream, then we'll have machines that have the same intellectual capabilities that human beings do- but there's one other fascinating part of the puzzle.

So a colleague of mine here at the University of Oxford, Robin Dunbar, is an evolutionary psychologist, and he was interested in the following question: Why do human beings have big brains? It's a very natural question. What Dunbar became convinced of was the idea that we have big brains because we are social animals, and we have big brains to be able to cope with many social relationships- you know, where I keep track of what Bob thinks about what Alice thinks about Bob, that kind of thing, and how these relationships stand in relation to one another. And what I find so fascinating about that is that it means human intelligence is, in a fundamental way, social intelligence.

Back in the 1950s when John McCarthy and his contemporaries were thinking about AI, what they wanted to do was to demonstrate that machines could do things like learn and solve problems. And it's only much more recently that AI has become concerned with these social aspects. What happens if you have two AI systems that can start to interact with one another? Then how do we give them social skills, skills like cooperation, the ability to work as a team, to coordinate with each other, to negotiate with each other?

So, how might we get there, to conscious machines? One of the steps along that path is the idea that we will be able to build machines which can put themselves in the mind of another. I think that's a step in the right direction, but the truth is we don't know how to take even that step at the moment.

Human beings are wonderful creations. I mean, they are the most incredible creations in the entire Universe, but there's nothing magic about them. We are a bunch of atoms that are bumping up against each other. For that reason, I don't think there should be any logical reason that says that conscious machines aren't possible. But saying that something is logically possible and saying that we know how to do it are completely different things.

Do we know how to do it? Absolutely not. And actually, one of the fundamental problems is that consciousness itself, in human beings, is not remotely understood. It is one of the big mysteries in science. How does that large number of neurons, connected in all those weird ways, create consciousness and self-awareness- the human experience?

So the path ahead, I think, is gonna be slow and tortuous. These are fearsomely complex things that are being created. But one of the fascinating things, not about AI but about computing generally, is that the limits of computing are not the limits of concrete or steel or anything like that in the physical world. You're really bounded only by what you can imagine.
