
The future of the mind: Exploring machine consciousness

What if consciousness is just a blip in the universe, a momentary flowering of experience that is unique to life in early technological civilizations—but eventually vanishes?

Chapter 1: The hard problem of consciousness
Consciousness is the felt quality of experience. So when you see the rich hues of a sunset, or smell the aroma of your morning coffee, you're having a conscious experience. Whenever you're awake, and even when you're dreaming, you are conscious. Consciousness is the most immediate aspect of your mental life. It's what makes life wonderful at times, and it's also what makes life so difficult and painful at others.

No one fully understands why we're conscious. In neuroscience, there's a lot of disagreement about the actual neural basis of consciousness in the brain. In philosophy, there is something called the hard problem of consciousness, which is due to the philosopher David Chalmers. The hard problem of consciousness asks, why must we be conscious? Given that the brain is an information processing engine, why does it need to feel like anything to be us from the inside?

Chapter 2: Are we ready for machines that feel?
SUSAN SCHNEIDER: The hard problem of consciousness is actually something that isn't quite directly the issue we want to get at when we're asking whether machines are conscious. The problem of AI consciousness simply asks, could the AIs that we humans develop one day or even AIs that we can imagine in our mind's eye through thought experiments, could they be conscious beings? Could it feel like something to be them?

The problem of AI consciousness is different from the hard problem of consciousness. In the case of the hard problem, it's a given that we're conscious beings. We're assuming that we're conscious, and we're asking, why must it be the case? The problem of AI consciousness, in contrast, asks whether machines could be conscious at all.

So why should we care about whether artificial intelligence is conscious? Well, given the rapid-fire developments in artificial intelligence, it wouldn't be surprising if within the next 30 to 80 years we start developing very sophisticated general intelligences. They may not be precisely like humans. They may not be as smart as us. But they may be sentient beings, and if so, we need ways of determining that. It would be awful if, for example, we sent them to fight our wars, forced them to clean our houses, and made them essentially a slave class. We don't want to make that mistake. We want to be sensitive to those issues, so we have to develop ways to determine whether artificial intelligence is conscious or not.

It's also extremely important because as we try to develop general intelligences, we want to understand the overall impact that consciousness has on an intelligent system. Would the spark of consciousness, for instance, make a machine safer and more empathetic? Or would it be adding something like volatility? Would we be, in effect, creating emotional teenagers that can't handle the tasks that we give them? So in order for us to understand whether machines are conscious, we have to be ready to hit the ground running and actually devise tests for conscious machines.

Chapter 3: Playing God: Are all machines created equal?
SUSAN SCHNEIDER: In my book, I talk about the possibility of consciousness engineering. So suppose we figure out ways to engineer consciousness in machines. It may be the case that we want to deliberately make sure that certain machines are not conscious. Consider, for example, a machine that we would send to dismantle a nuclear reactor, quite possibly sending it to its death, or a machine that we would send to a war zone. Would we really want to send conscious machines into those circumstances? Would it be ethical?

You might say, well, maybe we can tweak their minds so they enjoy what they're doing, or so they don't mind sacrifice. But that gets into some deep-seated engineering issues that are actually ethical in nature, issues that go back to Brave New World, where humans were genetically engineered and took a drug called soma so that they would want to live the lives they were given. So we have to really think about the right approach. It may be the case that we deliberately design the machines for certain tasks to not be conscious.

On the other hand, should we actually be capable of making some machines conscious, it may be that humans want conscious AI companions. So, for example, suppose that humans want elder care androids, which are actually under development in Japan today. As you're looking at the android shop, thinking about the kind of android you want to take care of your elderly grandmother, you decide you want a sentient being, one who would love your grandmother. You feel that is what best does her justice. And in other cases, humans may actually want relationships with AIs. So there could be a demand for conscious AI companions.

Chapter 4: Superintelligence over sentience
SUSAN SCHNEIDER: In Artificial You, I actually offer a 'wait and see' approach to machine consciousness. I urge that we just don't know enough right now about the substrates that could be used to build machine minds. We don't even know which microchips will be in use in 30 to 50 years, or even in 10. So we don't know enough about the substrate, and we don't know enough about the architecture of the artificial general intelligences that could be built. We have to investigate all these avenues before we conclude that consciousness is an inevitable byproduct of any sophisticated artificial intelligence that we design.

Further, one concern I have is that consciousness could be outmoded by a sophisticated AI. So consider a superintelligent AI, an AI which, by definition, could outthink humans in every respect: social intelligence, scientific reasoning, and more. A superintelligence would have vast resources at its disposal. It could be computronium built up from the resources of an entire planet, with a database more extensive even than the human World Wide Web.

So what would be novel to a superintelligence that would require slow, conscious processing? The thing about conscious processing in humans is that it's particularly useful for slow, deliberative thinking. Consciousness in humans is associated with slow mental processing, with working memory and attention. There are important limitations on the number of variables we can hold in our minds at a given time. I mean, we're very bad at working memory. We can barely remember a phone number for five minutes before we write it down. That's how bad our working memory systems are.

So if we are using consciousness for these slow, deliberative elements of our mental processing, and a superintelligence, in contrast, is an expert system with a vast intellectual domain that encompasses the entire World Wide Web and is lightning fast in its processing, why would it need slow, deliberative focus? In short, a superintelligent system might outmode consciousness because conscious processing is slow and inefficient. So the most intelligent systems may not be conscious.

Chapter 5: Enter: Post-biological existence
SUSAN SCHNEIDER: Given that a superintelligence may outmode consciousness, we have to think about the role that consciousness plays in the evolution of intelligent life. Right now, NASA and many astrobiologists project that there could be life throughout the universe, and they've identified exoplanets, planets that are hospitable, in principle, to intelligent life. That is extremely exciting. But the origin of life right now is a matter of intense debate in astrophysics. And it may be that all of these habitable planets that we've identified are actually uninhabited.

But on the assumption that there's lots of intelligent life out there, you have to consider that, should these life forms survive their technological maturity, they may be turning on their own artificial intelligence devices. They may eventually upgrade their own brains so that they become cyborgs, post-biological beings. Eventually, they may even have their own singularities.

If that's the case, intelligence may go from being biological to post-biological. And as I stress in my project with NASA, these highly sophisticated post-biological beings may themselves outmode consciousness. Consciousness may be a blip, a momentary flowering of experience in the universe at a point in the history of life when there is an early technological civilization. But then, as those civilizations have their own singularities, consciousness may sadly leave those systems.

Chapter 6: The challenge: Maximizing conscious experience
SUSAN SCHNEIDER: That may sound grim, but I bring it up really as a challenge for humans. I believe that understanding how consciousness and intelligence interrelate could lead us to make better decisions about how we enhance our own brains. On my view, we should enhance our brains in a way that maximizes sentience, that allows conscious experience to flourish. And we certainly don't want to become expert systems that have no felt quality to experience. So the challenge for a technological civilization is to think not just technologically but philosophically, to think about how these enhancements impact our conscious experience.

  • The hard problem of consciousness, as coined by the philosopher David Chalmers, asks: Why must we be conscious? Given that the brain is an information processing engine, why does it need to feel like anything to be us?
  • The problem of AI consciousness is equally complicated. We know humans are conscious, but when it comes to AI, the question is: Could the AIs that we humans develop be conscious beings? Could it feel like something to be them? And how could we possibly know for sure, short of them telling us?
  • How might superintelligence render consciousness extinct? Over 6 chapters in this video, philosopher and cognitive scientist Susan Schneider explores the philosophical problems that underlie the development of AI and the nature of conscious minds.
