Daniel Dennett
Professor of Philosophy and Co-director of the Center for Cognitive Studies, Tufts University

Daniel Dennett Investigates Artificial Intelligence

Daniel Dennett with the argument against humanoid robots.

Daniel C. Dennett is the author of Intuition Pumps and Other Tools for Thinking, Breaking the Spell, Freedom Evolves, and Darwin's Dangerous Idea and is University Professor and Austin B. Fletcher Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University. He lives with his wife in North Andover, Massachusetts, and has a daughter, a son, and a grandson. He was born in Boston in 1942, the son of a historian by the same name, and received his B.A. in philosophy from Harvard in 1963. He then went to Oxford to work with Gilbert Ryle, under whose supervision he completed the D.Phil. in philosophy in 1965. He taught at U.C. Irvine from 1965 to 1971, when he moved to Tufts, where he has taught ever since, aside from periods visiting at Harvard, Pittsburgh, Oxford, and the École Normale Supérieure in Paris.

His first book, Content and Consciousness, appeared in 1969, followed by Brainstorms (1978), Elbow Room (1984), The Intentional Stance (1987), Consciousness Explained (1991), Darwin's Dangerous Idea (1995), Kinds of Minds (1996), and Brainchildren: A Collection of Essays 1984-1996. Sweet Dreams: Philosophical Obstacles to a Science of Consciousness was published in 2005. He co-edited The Mind's I with Douglas Hofstadter in 1981, and he is the author of over three hundred scholarly articles on various aspects of the mind, published in journals ranging from Artificial Intelligence and Behavioral and Brain Sciences to Poetics Today and the Journal of Aesthetics and Art Criticism.

Dennett gave the John Locke Lectures at Oxford in 1983, the Gavin David Young Lectures at Adelaide, Australia, in 1985, and the Tanner Lecture at Michigan in 1986, among many others. He has received two Guggenheim Fellowships, a Fulbright Fellowship, and a Fellowship at the Center for Advanced Studies in Behavioral Science. He was elected to the American Academy of Arts and Sciences in 1987.

He was the Co-founder (in 1985) and Co-director of the Curricular Software Studio at Tufts, and has helped to design museum exhibits on computers for the Smithsonian Institution, the Museum of Science in Boston, and the Computer Museum in Boston.

Transcript

Question: Are you an advocate of furthering AI research?

Daniel Dennett:    I think that it’s been a wonderful field and has a great future, and some of the directions are less interesting to me and less important theoretically, I think, than others.  I don’t think it needs a champion.  There’s plenty of drive to pursue this research in different ways. 

What I don’t think is going to happen, and I don’t think it’s important to try to make happen: I don’t think we’re going to have really conscious humanoid agents anytime in the foreseeable future. And I think there’s not only no good reason to try to make such agents, but there are some pretty good reasons not to try. Now, that might seem to contradict the fact that I worked on the Cog project at MIT, which of course was an attempt to create a humanoid agent, Cog, and to implement the multiple drafts model of consciousness, my model of consciousness, on it.

We sort of knew we weren’t going to succeed, but we were going to learn a lot about what had to go in there. And that’s what made it interesting: we could see, by working on an actual project, what some of the most demanding contingencies and requirements and dependencies were.

It’s proof of concept. You want to see what works but then you don’t have to actually do the whole thing. 

I compare this to imagining a task in robotics: designing and building a robotic bird which could, you know, weigh three or four ounces, fly around the room, catch flies, and land on a twig.

Is it possible in principle to make such a robotic bird? I think it’s possible in principle.

What would it cost? Oh, much more than sending people to the moon. It would dwarf the Manhattan Project. It would be a huge effort, and we wouldn’t learn that much.

We can learn by doing the parts. By understanding bird flight and bird navigation, we can do that without ever putting it all together, which would be a colossal expense and not worth it.

There are plenty of birds; we don’t need that, we don’t need to make any, and we can make quasi-birds. In fact, they are making tiny robotic surveillance flying things. They don’t perfectly mimic birds; they don’t have to. And that’s the way AI should go as well.

Recorded on Mar 6, 2009.
