AI Is Evolving on Its Own. Does That Make It Dangerous?
Philosopher Daniel Dennett believes AI should never become conscious — and no, it's not because of the robopocalypse.
Daniel C. Dennett is the author of Intuition Pumps and Other Tools for Thinking, Breaking the Spell, Freedom Evolves, and Darwin's Dangerous Idea and is University Professor and Austin B. Fletcher Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University. He lives with his wife in North Andover, Massachusetts, and has a daughter, a son, and a grandson. He was born in Boston in 1942, the son of a historian by the same name, and received his B.A. in philosophy from Harvard in 1963. He then went to Oxford to work with Gilbert Ryle, under whose supervision he completed the D.Phil. in philosophy in 1965. He taught at U.C. Irvine from 1965 to 1971, when he moved to Tufts, where he has taught ever since, aside from periods visiting at Harvard, Pittsburgh, Oxford, and the École Normale Supérieure in Paris.
His first book, Content and Consciousness, appeared in 1969, followed by Brainstorms (1978), Elbow Room (1984), The Intentional Stance (1987), Consciousness Explained (1991), Darwin's Dangerous Idea (1995), Kinds of Minds (1996), and Brainchildren: A Collection of Essays 1984-1996. Sweet Dreams: Philosophical Obstacles to a Science of Consciousness was published in 2005. He co-edited The Mind's I with Douglas Hofstadter in 1981, and he is the author of over three hundred scholarly articles on various aspects of the mind, published in journals ranging from Artificial Intelligence and Behavioral and Brain Sciences to Poetics Today and the Journal of Aesthetics and Art Criticism.
Dennett gave the John Locke Lectures at Oxford in 1983, the Gavin David Young Lectures at Adelaide, Australia, in 1985, and the Tanner Lecture at Michigan in 1986, among many others. He has received two Guggenheim Fellowships, a Fulbright Fellowship, and a Fellowship at the Center for Advanced Studies in Behavioral Science. He was elected to the American Academy of Arts and Sciences in 1987.
He was the Co-founder (in 1985) and Co-director of the Curricular Software Studio at Tufts, and has helped to design museum exhibits on computers for the Smithsonian Institution, the Museum of Science in Boston, and the Computer Museum in Boston.
Daniel C. Dennett: I think a lot of people just assume that the way to make AIs more intelligent is to make them more human. But I think that's a very dubious assumption.
We're much better off with tools than with colleagues. We can make tools that are smart as the dickens, and use them and understand what their limitations are without giving them ulterior motives, purposes, a drive to exist and to compete and to beat the others. Those are features that don't play any crucial role in the competences of artificial intelligence, so for heaven's sake don't bother putting them in.
Leave all that out, and what we have is very smart "thingies" that we can treat like slaves, and it's quite all right to treat them as slaves because they don't have feelings; they're not conscious. You can turn them off; you can tear them apart, the same way you can with an automobile, and that's the way we should keep it.
Now that we're in the age of intelligent design—lots of intelligent designers around—a lot of them are intelligent enough to realize that Orgel's Second Rule is true: "Evolution is cleverer than you are." That's Francis Crick's famous quip. And so what they're doing is harnessing evolutionary processes to do the heavy lifting without human help. So we have all these deep learning systems, and they come in varieties. There are Bayesian networks, reinforcement learning of various sorts, deep learning neural networks… And what these computer systems have in common is that they are competent without comprehension. Google Translate doesn't know what it's talking about when it translates a bit of Turkish into a bit of English. It doesn't have to. It's not as good as the translation that a bilingual can do, but it's good enough for most purposes.
And what's happening in many fields in this new wave of AI is the creation of systems, black boxes, where you know that the probability of getting the right answer is very high; they are extremely good, they're better than human beings at churning through the data and coming up with the right answer. But they don't understand how they do it. Nobody understands in detail how they do it and nobody has to.
So we've created entities which are as inscrutable to us as a bird or a mammal considered as a collection of cells is; there's still a lot we don't understand about what makes them tick.
But these entities, instead of being excellent flyers or fish catchers or whatever, are excellent pattern detectors, excellent statistical analysts, and we can use these products, these intellectual products, without knowing quite how they're generated, but having good, responsible reasons for believing that they will generate the truth most of the time.
No existing computer system, no matter how good it is at answering questions like Watson on Jeopardy! or categorizing pictures, for instance, is conscious today, not close. And although I think it's possible in principle to make a conscious android, a conscious robot, I don't think it's desirable; I don't think there would be great benefits to doing this, and there would be some significant harms and dangers too.
You could, at tremendous expense, but you'd have to have, in fact, quite a revolution in computer design, one that would take you right down to the very base of the hardware.
If consciousness is ours to give, should we give it to AI? This is the question on the mind of the very sentient Daniel Dennett. The emerging trend in AI and AGI is to humanize our robot creations: they look ever more like us, emote as we do, and even imitate our flaws through machine learning. None of this makes the AI smarter, only more marketable. Dennett suggests remembering what AIs are: tools and systems built to organize our information and streamline our societies. He has no hesitation in saying that they are slaves built for us, and we can treat them as such because they have no feelings. If we eventually understand consciousness enough to install it into a robot, it would be unwise. It won't make them more intelligent, he says, only more anxious. Daniel Dennett's most recent book is From Bacteria to Bach and Back: The Evolution of Minds.