Peter Singer has been described as the world’s most influential philosopher. Born in Melbourne in 1946, he has been professor of bioethics at Princeton University since 1999. His many books[…]

What happens when AI becomes conscious? Philosopher Peter Singer explores the ethical dilemma that could follow the creation of sentient machines. If AI can feel pain or experience pleasure, do we have a moral obligation to protect it? 

Singer argues that governments, scientists, and ethicists must prepare now for the rights and protections conscious AI may require.

PETER SINGER: As we continue to develop AI, and particularly as we develop artificial general intelligence, I think it's quite likely that at some point we will create a conscious being. I don't know when that will happen, but I don't see any reason in principle why, if consciousness can arise in a biological, carbon-based life form like me, which has developed a brain and neurons, you couldn't get something similar happening in something that isn't a carbon-based life form, something made of silicon chips. And if we do that, we will have created an artificial conscious being.

Of course, we've already created vast numbers of conscious beings. We're creating animals all the time, and we vary their nature by breeding. So it's not the first time we've created conscious beings, but it would be the first time we've created an artificial conscious being. The non-human conscious beings we've created, we have mostly exploited for our own purposes: using them for labor, like horses and oxen, or rearing them for food, like cows, pigs, chickens, and now fish, which we're rearing in large numbers. So there is a danger that we will do the same with conscious AI. But I'm hopeful that if we do create conscious AI, we will realize that that conscious being has interests: an interest in not feeling pain, an interest in enjoying its life, once it has consciousness. And we would be wrong to disregard the interests of that conscious being and simply treat it as another tool or another slave.

Now, it's possible that as we get close to creating this conscious being, we'll realize that we would then have to give it rights and would not be able to use it in certain ways. And we might perhaps halt what we're doing at that point, just below the level of consciousness, or maybe at a very dull level of consciousness where the AI is not really experiencing pleasure or pain because it's not capable of that. It's just got some more neutral, sort of blah state of consciousness, if you like. That's possible, and then maybe it wouldn't really have interests we have to worry about. But I certainly think that if we did create beings that were more like non-human animals, we ought to treat them much better than we now treat non-human animals. And, of course, I also think we ought to change our treatment of non-human animals as well.

The question of how we treat sentient AI is going to be one for everybody as individuals, just as the question of how we treat animals is one for people as individuals. Are we going to have a companion animal, a dog or a cat, and treat them well? Are we going to buy products from factory farms, which means we're supporting cruelty? Those are individual questions. But there are also national government policy issues. Just as I believe governments should set standards for animal welfare, and should not permit animals to be treated the way they're now treated in factory farms, so I would think governments will need to set standards for the treatment of sentient, conscious AI once we get to that point.

And you know, it'll be a novel question. Maybe they'll set up committees of experts. I can imagine committees consisting of people who are expert in AI and in the nature of the AI, experts knowledgeable about consciousness itself and the relevant sciences, the neurosciences relevant to that, but also philosophers, and perhaps lawyers to help craft policies and frameworks for this.

So I think it will be an interesting task, for who knows when. Maybe by the middle of the 21st century there might be committees doing this.

