Philosophers David Chalmers and Daniel Dennett disagree over “philosophical zombies,” a thought experiment designed to probe the nature of human consciousness.
Zombies are a big part of our pop culture. They are both a cathartic exploration of what it means to be human and a vehicle for social commentary. The word “zombie” comes from Haitian folklore and refers to a corpse animated by witchcraft. Facing horrific conditions, 17th-century enslaved people on the sugar plantations of the French colony of Saint-Domingue (present-day Haiti) often considered suicide but feared being trapped in their bodies, wandering the Earth as soulless shells.
In philosophy, this idea of a hypothetical creature that looks like a regular human but has no conscious experiences is known as a “philosophical zombie” or a “p-zombie”.
Why do philosophers need zombies?
The concept is kind of a mind trick. Imagine a being that looks and even talks like a human. It goes through all the normal motions of a human and yet has no consciousness. And you would have no idea that it is not like you.
According to philosophers like David Chalmers, p-zombies are an argument against physicalism: the school of thought that everything that makes us human ultimately derives from our physical characteristics.
Physicalism is based on the success of science in exploring the physical world. According to physicalists, we are essentially intricate arrangements of atoms. Behaviorists, a subset of physicalists, maintain that even mental processes (thoughts, desires, etc.) are just dispositions to behave in certain ways.
If a p-zombie that is exactly like us, except for the sense of self and consciousness, is logically conceivable, then this possibility could support dualism: an alternative view that sees the world as consisting of not just the physical but also the mental.
David Chalmers, an Australian philosopher and cognitive scientist who currently teaches at NYU, thinks that the p-zombie thought experiment illustrates the “hard problem” of consciousness: why do physical processes give rise to conscious experience?
In other words, since a world of zombies is imaginable, all behaving purely at the physical level, why did evolution produce consciousness in humans?
“If there is a possible world which is just like this one except that it contains zombies, then that seems to imply that the existence of consciousness is a further, nonphysical fact about our world. To put it metaphorically, even after determining the physical facts about our world, God had to ‘do more work’ to ensure that we weren't zombies,” says Chalmers.
His argument goes like this:
1. Physicalism says that everything in our world, including consciousness, is physical.
2. If physicalism is true, then any possible world that is physically identical to ours must contain everything our world contains, including consciousness.
3. But we can conceive of a “zombie world”: a world physically identical to ours in which no one has consciousness.
4. If such a world is conceivable, it is metaphysically possible, so a physical duplicate of our world could lack consciousness.
5. Physicalism is then proven false.
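In the philosophy-of-mind literature, this argument is often compressed into a modal-logic sketch (a standard formalization, not a quote from Chalmers), where P stands for the conjunction of all physical truths about our world and Q for the claim that someone is conscious:

```latex
% P = conjunction of all physical truths about our world
% Q = "someone is conscious"
% \Diamond = "it is metaphysically possible that"
\begin{enumerate}
  \item $P \land \neg Q$ is conceivable. % a zombie world can be coherently imagined
  \item If $P \land \neg Q$ is conceivable, then $\Diamond(P \land \neg Q)$.
  \item If $\Diamond(P \land \neg Q)$, then physicalism is false.
  \item Therefore, physicalism is false.
\end{enumerate}
```

In this notation, Dennett's objection (discussed below) targets premise 1: he denies that a zombie world is genuinely conceivable once the task of conception is taken seriously.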
Physicalists, of course, beg to differ. They argue that any identical copy of our physical world would contain consciousness by necessity.
Daniel Dennett, a noted physicalist philosopher and Big Think expert, wrote a refutation of p-zombies in his commentary, tellingly titled “The Unimagined Preposterousness of Zombies”. In it, he argues that philosophical zombies are logically incoherent.
“When philosophers claim that zombies are conceivable, they invariably underestimate the task of conception (or imagination), and end up imagining something that violates their own definition,” says Dennett.
For Dennett's conception of consciousness and free will, check out this video:
Philosopher and cognitive scientist David Chalmers warns about an AI-dominated future world without consciousness at a recent conference on artificial intelligence that also included Elon Musk, Ray Kurzweil, Sam Harris, Demis Hassabis and others.
Recently, a conference on artificial intelligence, tantalizingly titled “Superintelligence: Science or Fiction?”, was hosted by the Future of Life Institute, which works to promote “optimistic visions of the future”.
The conference offered a range of opinions on the subject from a variety of experts, including Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google's DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The conversation centered on the future benefits and risks of artificial superintelligence, with everyone generally agreeing that it’s only a matter of time before AI becomes paramount in our lives. Eventually, AI will surpass human intelligence, with the ensuing risks and transformations. And Elon Musk, for one, thinks it’s rather pointless to be concerned, as we are already cyborgs, considering all the technological extensions of ourselves that we depend on daily.
A worry for Australian philosopher and cognitive scientist David Chalmers is creating a world devoid of consciousness. He notes that discussions of future superintelligence often presume that AIs will eventually become conscious. But what if that sci-fi possibility of creating completely artificial humans never comes to fruition? Instead, we could be creating a world endowed with artificial intelligence but no actual consciousness.
David Chalmers speaking. Credit: Future of Life Institute.
Here’s how Chalmers describes this vision (starting at 22:27 in the YouTube video below):
“For me, that raises the possibility of a massive failure mode in the future, the possibility that we create human or superhuman level AGI and we've got a whole world populated by superhuman level AGIs, none of whom is conscious. And that would be a world, could potentially be a world of great intelligence, no consciousness, no subjective experience at all. Now, I think many, many people, with a wide variety of views, take the view that basically subjective experience or consciousness is required in order to have any meaning or value in your life at all. So therefore, a world without consciousness could not possibly be a positive outcome. Maybe it wouldn't be a terribly negative outcome, it would just be a zero outcome, and among the worst possible outcomes.”
Chalmers is known for his work on the philosophy of mind and has delved particularly into the nature of consciousness. He famously formulated the idea of a “hard problem of consciousness”, which he describes in his 1995 paper “Facing Up to the Problem of Consciousness” as the question of “why does the feeling which accompanies awareness of sensory information exist at all?”
His solution to this issue of an AI-run world without consciousness? Create a world of AIs with human-like consciousness:
“I mean, one thing we ought to at least consider doing there is, given that we don't understand consciousness, we don't have a complete theory of consciousness, maybe we can be most confident about consciousness when it's similar to the case that we know about the best, namely human consciousness... So therefore maybe there is an imperative to create human-like AGI in order that we can be maximally confident that there is going to be consciousness,” says Chalmers (starting at 23:51).
By making it our clear goal to fully recreate ourselves in all of our human characteristics, we may be able to avoid a soulless world of machines becoming our destiny. It’s a warning, and an objective, worth considering while we can. Yet, by Chalmers’s own admission, we don’t understand consciousness, so perhaps this is a goal doomed to failure.
Please check out the excellent conference in full here:
Robots ready to produce the new Mini Cooper are pictured during a tour of the BMW's plant at Cowley in Oxford, central England, on November 18, 2013. (Photo credit: ANDREW COWIE/AFP/Getty Images)
A recent conference on the future of artificial intelligence features visionary debate between Elon Musk, Ray Kurzweil, Sam Harris, Nick Bostrom, David Chalmers, Jaan Tallinn and others.
A fascinating conference on artificial intelligence was recently hosted by the Future of Life Institute, an organization aimed at promoting “optimistic visions of the future”. The conference “Superintelligence: Science or Fiction?” included such luminaries as Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google’s DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The group touched on a number of topics about the future benefits and risks of coming artificial superintelligence, with everyone generally agreeing that it’s only a matter of time before AI becomes ubiquitous in our lives. Eventually, AI will surpass human intelligence, with the risks and transformations that such a seismic event would entail.
Elon Musk has not always been an optimistic voice on AI, having warned of its dangers to humanity. But here he sounds more muted about the threat. He sees the AI future as inevitable, with dangers to be mitigated through government regulations, even though he concedes that such regulations can be “a bit of a buzzkill”.
He also brings up an interesting perspective that our fears of the technological changes the future will bring are largely irrelevant. According to Musk, we are already cyborgs by utilizing “machine extensions” of ourselves like phones and computers.
“By far you have more power, more capability, than the President of the United States had 30 years ago. If you have an Internet link you have an oracle of wisdom, you can communicate to millions of people, you can communicate to the rest of Earth instantly. I mean, these are magical powers that didn’t exist, not that long ago. So everyone is already superhuman, and a cyborg,” says Musk [at 33:56].
He sees humans as information-processing machines that pale in comparison to the powers of a computer. What is necessary, according to Musk, is to create a greater integration between man and machine, specifically altering our brains with technology to make them more computer-like.
“I think the two things that are needed for a future that we would look at and conclude is good, most likely, is, we have to solve that bandwidth constraint with a direct neural interface. I think a high bandwidth interface to the cortex, so that we can have a digital tertiary layer that’s more fully symbiotic with the rest of us. We’ve got the cortex and the limbic system, which seem to work together pretty well, they’ve got good bandwidth, whereas the bandwidth to the additional tertiary layer is weak,” explained Musk [at 35:05].
Once we solve that issue, AI will spread everywhere. It’s important to do so because, according to Musk, if only a small group had such capabilities, they would become “dictators” with “dominion over Earth”.
What would a world filled with such cyborgs look like? Visions of Star Trek’s Borg come to mind.
Musk thinks it will be a society full of equals:
“And if we do those things, then it will be tied to our consciousness, tied to our will, tied to the sum of individual human will, and everyone would have it so it would be sort of still a relatively even playing field, in fact, it would be probably more egalitarian than today,” points out Musk [at 36:38].
The whole conference is immensely fascinating and worth watching in full. Check it out here: