Who's in the Video
Peter Singer has been described as the world’s most influential philosopher. Born in Melbourne in 1946, he has been professor of bioethics at Princeton University since 1999. His many books[…]

Philosopher Peter Singer explores the impact of AI on animals, drawing historical parallels to how humans have used earlier technologies to exploit nature. He discusses AI's current applications in factory farming and wildlife management, which are already raising ethical concerns.

Singer argues that AI should not serve human interests alone, and that we should consider how it affects all sentient beings as we continue to develop it.

He also delves into the philosophical question of AI surpassing human intelligence, and its potential consciousness and moral status, emphasizing the need for government standards akin to animal welfare regulations.

PETER SINGER: There are quite a few things that concern me about AI. It clearly has both positive and negative aspects.

PARROT: Okay, Google, what's the weather?

GOOGLE: Right now, in Orlando, it's 86 degrees.

SINGER: There are a lot of concerns, and one of them is about the impact of AI on animals. When you look at past technologies, they have always been used to the disadvantage of animals in various ways. We invented the wheel, a great invention, of course, wheels help us move around, but that means we've tied horses and oxen and various other animals to the carts that we've made with wheels, effectively enslaving them to pull them.

Similarly, with AI, we are already using it on animals in a variety of ways. Producers are starting to use AI to run factory farms and to remove humans even further from the animals in those farms. In New Zealand, there are feral possums that were imported from Australia for fur, but they're damaging New Zealand's native forests, which never had possums, and drones are being used to kill them. And in general, when you look at studies of AI ethics, they tend to talk about how AI must be used for human benefit. But I don't think that's enough. We share this planet with other species who are capable of feeling pain and whose interests must be counted. So I think statements of AI ethics ought instead to talk about AI being used for the benefit of all sentient beings.

There are other, broader concerns that are somewhat more philosophical. One is about whether AI could become more intelligent than us, a super-intelligent artificial general intelligence. We've already created vast numbers of conscious beings. We're creating animals all the time, and we vary their nature by breeding. I don't see any in-principle reason why you couldn't get something similar happening in something that isn't a carbon-based life form, but is instead made of silicon chips.

If AI becomes conscious, if we develop an artificial intelligence that is itself a conscious, sentient being, how can we tell whether it's mimicking consciousness or whether it's genuinely conscious? And what would its moral status be? Would its moral status be similar to that of humans? Would it be more like animals, or would it still be a tool we could use as we pleased? The question then is, will we treat them like the other non-human conscious beings we've created, whom we have mostly exploited for our particular purposes?

Just as I believe governments should set standards for animal welfare, and should not permit animals to be treated the way they're now treated in factory farms, I would think governments will need to set standards for the treatment of sentient, conscious AI.

And then there are reasonable concerns about whether we will be able to control it. The Oxford philosopher Nick Bostrom has a fable about a group of sparrows who think that it would be terrific if they had an owl to help them with some labor tasks. Owls are much bigger and stronger than they are. And so they think about getting an owl egg, hatching the owl, and then training the owl to do what they want. And there's one wise old sparrow who says, well, before we actually hatch this egg, shouldn't we make sure that we can train the owl to do what we want? And the other sparrows say, oh no, it's going to be so wonderful, so let's keep going. The point of the fable, of course, is that owls eat sparrows. And once you have hatched an owl, the sparrows are not going to be able to control it. So, is a super-intelligent AI going to be like the owl would have been to the sparrows?

