We have greater moral obligations to robots than to humans

We're directly responsible for our robots' joy, suffering, thoughtfulness, and creative potential.

Down goes HotBot 4b into the volcano. The year is 2050 or 2150, and artificial intelligence has advanced sufficiently that such robots can be built with human-grade intelligence, creativity and desires. HotBot will now perish on this scientific mission. Does it have rights? In commanding it to go down, have we done something morally wrong?

The moral status of robots is a frequent theme in science fiction, dating back at least to Isaac Asimov's robot stories, and the consensus is clear: if someday we manage to create robots that have mental lives similar to ours, with human-like plans, desires and a sense of self, including the capacity for joy and suffering, then those robots deserve moral consideration similar to that accorded to natural human beings. Philosophers and researchers on artificial intelligence who have written about this issue generally agree.

I want to challenge this consensus, but not in the way you might predict. I think that, if we someday create robots with human-like cognitive and emotional capacities, we owe them more moral consideration than we would normally owe to otherwise similar human beings.

Here’s why: we will have been their creators and designers. We are thus directly responsible both for their existence and for their happy or unhappy state. If a robot needlessly suffers or fails to reach its developmental potential, it will be in substantial part because of our failure – a failure in our creation, design or nurturance of it. Our moral relation to robots will more closely resemble the relation that parents have to their children, or that gods have to the beings they create, than the relationship between human strangers.

In a way, this is no more than equality. If I create a situation that puts other people at risk – for example, if I destroy their crops to build an airfield – then I have a moral obligation to compensate them, greater than my obligation to people with whom I have no causal connection. If we create genuinely conscious robots, we are deeply causally connected to them, and so substantially responsible for their welfare. That is the root of our special obligation.

Frankenstein’s monster says to his creator, Victor Frankenstein:

I am thy creature, and I will be even mild and docile to my natural lord and king, if thou wilt also perform thy part, the which thou owest me. Oh, Frankenstein, be not equitable to every other, and trample upon me alone, to whom thy justice, and even thy clemency and affection, is most due. Remember that I am thy creature: I ought to be thy Adam….

We must either create only robots sufficiently simple that we know they do not merit moral consideration – as is the case with all existing robots today – or bring them into existence only carefully and solicitously.

Alongside this duty to be solicitous comes another, of knowledge – a duty to know which of our creations are genuinely conscious. Which of them have real streams of subjective experience, and are capable of joy and suffering, or of cognitive achievements such as creativity and a sense of self? Without such knowledge, we won’t know what obligations we have to our creations.

Yet how can we acquire the relevant knowledge? How does one distinguish, for instance, between a genuine stream of emotional experience and simulated emotions in an artificial mind? Merely programming a superficial simulation of emotion isn’t enough. If I put a standard computer processor manufactured in 2015 into a toy dinosaur and program it to say ‘Ow!’ when I press its off switch, I haven’t created a robot capable of suffering. But exactly what kind of processing and complexity is necessary to give rise to genuine human-like consciousness? On some views – John Searle’s, for example – consciousness might not be possible in any programmed entity; it might require a structure biologically similar to the human brain. Other views are much more liberal about the conditions sufficient for robot consciousness. The scientific study of consciousness is still in its infancy. The issue remains wide open.

If we continue to develop sophisticated forms of artificial intelligence, we have a moral obligation to improve our understanding of the conditions under which artificial consciousness might genuinely emerge. Otherwise we risk moral catastrophe – either the catastrophe of sacrificing our interests for beings that don't deserve moral consideration because they merely simulate happiness and suffering, or the catastrophe of failing to recognise robot suffering, and so unintentionally committing atrocities tantamount to slavery and murder against beings to whom we have an almost parental obligation of care.

We have, then, a direct moral obligation to treat our creations with an acknowledgement of our special responsibility for their joy, suffering, thoughtfulness and creative potential. But we also have an epistemic obligation to learn enough about the material and functional bases of joy, suffering, thoughtfulness and creativity to know when and whether our potential future creations deserve our moral concern.

Eric Schwitzgebel

This article was originally published at Aeon and has been republished under Creative Commons.
