Will robots have rights in the future?
Perhaps sooner than we think, we'll need to examine the moral standing of intelligent machines.
PETER SINGER: Suppose we become capable of developing artificial general intelligence at such a high level that we're convinced we have actually created a conscious being: a being who can not only express desires or wants but actually feels something inside, has experiences, is capable of feeling joy or sorrow or misery. If we get to that point, and I certainly don't think we're there yet, but we may get there one day, then there will be a lot of ethical issues, because then we will have created beings like us. And the question has to be raised: do they then have rights like us? I would say, well, why not? If they really are conscious, if they're also able to think and understand themselves, if they're self-aware in the way we are, then I think we ought to give as much concern and weight to their interests and their wants as we would give to any one of us.
I've argued that throughout history we have expanded the circle of moral concern: initially it covered just our own tribe, then a nation, then a race, and now all human beings. And I've been arguing for expanding it beyond just human beings to all sentient creatures, all beings capable of feeling pain, enjoying their lives, feeling miserable. That obviously includes many nonhuman animals. If we create robots that are also capable of feeling pain, then that will be somewhere else we have to push the circle of moral concern outwards, because I certainly think we would have to include them in our moral concern once we've actually created beings with capacities, desires, wants, enjoyments, and miseries similar to ours.
Exactly where we would place robots would depend on what capacities we believe they have. I can imagine that we might create robots limited to the intelligence level of nonhuman animals, and perhaps not the smartest nonhuman animals either. They could still perform routine tasks for us. They could fetch things for us on voice command. That's not very hard to imagine. But I don't think such a robot would necessarily be a sentient being. If it was just a robot whose workings we understood exactly, not very far from what we have now, I don't think it would be entitled to any rights or moral status. But if it was at a higher level than that, if we were convinced that it was a conscious being, then the kind of moral status it would have would depend on exactly what level of consciousness and what level of awareness. Is it more like a pig, for example? Well, then it should have the same rights as a pig, which, by the way, I think we are violating every day on a massive scale by the way we treat pigs in factory farms. So I'm not saying such a robot should be treated like pigs are being treated in our society today.
On the contrary: it should be treated with respect for its desires and awareness, its capacity to feel pain, and its social nature. All of the things we ought to take into account when we are responsible for the lives of pigs, we would also have to take into account when we're responsible for the lives of robots at a similar level. But if we created robots who were at our level, then I think we would have to give them really the same rights that we have. There would be no justification for saying, ah yes, but we're biological creatures and you're a robot. I don't think that has anything to do with the moral status of a being.
- If we eventually develop artificial intelligence sophisticated enough to experience emotions like joy and suffering, should we grant it moral rights, just as we would any other sentient being?
- Moral philosopher Peter Singer considers the ethical issues that could ensue as we expand the circle of moral concern to include these machines.
- A free download of the 10th anniversary edition of The Life You Can Save: How to Do Your Part to End World Poverty is available here.
The best leaders don't project perfection. Peter Fuda explains why.
- There are two kinds of masks leaders wear. Executive coach Peter Fuda likens one to The Phantom of the Opera—projecting perfectionism to hide feelings of inadequacy—and the other to The Mask, where leaders assume a persona of toughness or brashness because they imagine it projects the power needed for the position.
- Both of those masks are motivated by self-protection, rather than learning, growth and contribution. "By the way," says Fuda, "your people know you're imperfect anyway, so when you embrace your imperfections they know you're honest as well."
- The most effective leaders are those who try to perfect their craft rather than try to perfect their image. They inspire a culture of learning and growth, not a culture where people are afraid to ask for help.
To learn more, visit peterfuda.com.