Automation Nightmare: Philosopher Warns We Are Creating a World Without Consciousness
Philosopher and cognitive scientist David Chalmers warned of an AI-dominated future without consciousness at a recent conference on artificial intelligence that also featured Elon Musk, Ray Kurzweil, Sam Harris, Demis Hassabis, and others.
The Future of Life Institute, which works to promote “optimistic visions of the future,” recently hosted a conference on artificial intelligence tantalizingly titled “Superintelligence: Science or Fiction?”
The conference offered a range of opinions on the subject from a variety of experts, including Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google's DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The conversation centered on the future benefits and risks of artificial superintelligence, with the panelists generally agreeing that it’s only a matter of time before AI becomes paramount in our lives. Eventually, AI will surpass human intelligence, bringing risks and transformations with it. Elon Musk, for one, thinks it’s rather pointless to worry, as we are already cyborgs, considering all the technological extensions of ourselves that we depend on daily.
A worry for Australian philosopher and cognitive scientist David Chalmers is that we may create a world devoid of consciousness. He notes that discussions of future superintelligences often presume that AIs will eventually become conscious. But what if that sci-fi possibility, that we will create fully artificial humans, never comes to fruition? Instead, we could be creating a world endowed with artificial intelligence but no actual consciousness.
David Chalmers speaking. Credit: Future of Life Institute.
Here’s how Chalmers describes this vision (starting at 22:27 in the YouTube video below):

“For me, that raises the possibility of a massive failure mode in the future: the possibility that we create human or superhuman-level AGI and we've got a whole world populated by superhuman-level AGIs, none of whom is conscious. And that would be a world, could potentially be a world, of great intelligence, no consciousness, no subjective experience at all. Now, I think many, many people, with a wide variety of views, take the view that basically subjective experience or consciousness is required in order to have any meaning or value in your life at all. So therefore, a world without consciousness could not possibly be a positive outcome. Maybe it wouldn't be a terribly negative outcome; it would just be a zero outcome, and among the worst possible outcomes.”
Chalmers is known for his work in the philosophy of mind and has delved particularly into the nature of consciousness. He famously formulated the idea of a “hard problem of consciousness,” which he describes in his 1995 paper “Facing Up to the Problem of Consciousness” as the question of “why does the feeling which accompanies awareness of sensory information exist at all?”
His solution to this issue of an AI-run world without consciousness? Create a world of AIs with human-like consciousness:
“I mean, one thing we ought to at least consider doing there is making, given that we don't understand consciousness, we don't have a complete theory of consciousness, maybe we can be most confident about consciousness when it's similar to the case that we know about best, namely human consciousness... So, therefore, maybe there is an imperative to create human-like AGI, in order that we can be maximally confident that there is going to be consciousness,” says Chalmers (starting at 23:51).
By making it our clear goal to fully recreate ourselves in all of our human characteristics, we may be able to keep a soulless world of machines from becoming our destiny. A warning, and an objective, worth considering while we can. Yet Chalmers’s own caveat, that we don’t understand consciousness, suggests this goal may be doomed to failure.
Please check out the excellent conference in full here: