
Is It Ethical to Program Robots to Kill Us?

A new study highlights the ethical dilemmas raised by the rise of robotic and autonomous technology, such as self-driving cars.

As robots and robotic contraptions like self-driving cars become increasingly common in our lives, we must address the significant ethical issues that arise.

One area of immediate concern is the moral dilemmas that self-driving cars might face, and they are close to arriving on a road near you. You can program them with all kinds of safety features, but it is easy to imagine scenarios in which the rules governing such a car's behavior would come into conflict with one another.

For example, what if a car had to choose between hitting a pedestrian and hurting its own passengers? Or what if it had to choose between two equally dangerous maneuvers in which people get hurt either way, such as hitting a bus or striking a motorcyclist?

A new study shows that the public also has a hard time deciding what choice the car should make in such situations. People prefer to minimize casualties and would, hypothetically, rather have the car swerve and harm one driver than hit 10 pedestrians. But the same people would not want to buy and drive such a vehicle: they do not want a car that treats anything other than their own safety as its prime directive.
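To see why those two preferences pull in opposite directions, consider a toy model. It is entirely hypothetical and not from the study; the maneuver names and casualty counts are invented for illustration. The point is that a car minimizing total casualties and a car minimizing passenger harm can reach opposite decisions in the very same scenario:

```python
# A toy illustration (not from the study): two cost functions that disagree.
# Maneuver names and casualty counts are invented for the example.
maneuvers = {
    "swerve into barrier": {"passengers": 1, "pedestrians": 0},
    "stay on course":      {"passengers": 0, "pedestrians": 10},
}

def total_harm(outcome):
    # The "utilitarian" objective: minimize casualties overall.
    return outcome["passengers"] + outcome["pedestrians"]

def passenger_harm(outcome):
    # The "self-protective" objective: minimize harm to the car's occupants.
    return outcome["passengers"]

utilitarian = min(maneuvers, key=lambda m: total_harm(maneuvers[m]))
self_protective = min(maneuvers, key=lambda m: passenger_harm(maneuvers[m]))

print(utilitarian)      # swerve into barrier (1 casualty instead of 10)
print(self_protective)  # stay on course (the passenger is never harmed)
```

Same scenario, two defensible objectives, two incompatible answers. That gap is exactly what the survey respondents could not reconcile.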


“Most people want to live in a world where cars will minimize casualties,” says Iyad Rahwan, co-author of a paper on the study and an associate professor at the MIT Media Lab. “But everybody wants their own car to protect them at all costs. If everybody does that, then we would end up in a tragedy… whereby the cars will not minimize casualties.”


The numbers work out this way: 76% of respondents thought it more moral for a self-driving car to sacrifice one passenger than to hit 10 pedestrians. But when asked whether they would ride in such a car themselves, that percentage dropped by a third. Most respondents also opposed any government regulation of such vehicles, fearing that regulators would essentially be choosing who lives and dies in various situations.

The researchers themselves do not have an easy answer. As they put it:

“For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest.”

Still, since self-driving vehicles have great potential to eliminate human error, and with it a large share of car accidents, this is a problem that needs to be figured out.

The researchers point out that:

“This is a challenge that should be on the mind of carmakers and regulators alike.”

And prolonged deliberation might itself be counterproductive, as it:

“may paradoxically increase casualties by postponing the adoption of a safer technology.”

You can read their paper, “The social dilemma of autonomous vehicles,” in the journal Science. Besides Rahwan, the paper was written by Jean-Francois Bonnefon of the Toulouse School of Economics and Azim Shariff, an assistant professor of psychology at the University of Oregon.

Sci-fi writer Isaac Asimov famously formulated “The Three Laws of Robotics” all the way back in 1942. Their ethical implications still resonate today. The Three Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Perhaps in anticipation of a Skynet/Terminator-style robotic takeover, Asimov later added a fourth law, the Zeroth Law, which supersedes all the others: “0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
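As a purely illustrative exercise, here is a minimal sketch of how such a strict priority ordering might be encoded. Asimov never specified an implementation, and real systems cannot reduce “harm” to a simple true/false flag, so every name and predicate below is hypothetical:

```python
# A toy sketch of Asimov-style rule precedence. Every name here
# (Action, harms_humanity, etc.) is hypothetical; real autonomous
# systems cannot evaluate "harm" as a boolean predicate.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # Zeroth Law
    harms_human: bool = False      # First Law
    obeys_order: bool = True       # Second Law
    preserves_self: bool = True    # Third Law

def permitted(action: Action) -> bool:
    """Check the laws in strict priority order; a higher law
    always overrides a lower one."""
    if action.harms_humanity:      # Law 0 trumps everything
        return False
    if action.harms_human:         # Law 1
        return False
    if not action.obeys_order:     # Law 2 (yields to Laws 0 and 1)
        return False
    return True

def choose(actions: list[Action]) -> Action | None:
    """Among permitted actions, prefer self-preservation (Law 3)."""
    candidates = [a for a in actions if permitted(a)]
    if not candidates:
        return None
    return max(candidates, key=lambda a: a.preserves_self)

if __name__ == "__main__":
    options = [
        Action("swerve into wall", preserves_self=False),
        Action("continue straight", harms_human=True),
    ]
    best = choose(options)
    print(best.name if best else "no permitted action")  # swerve into wall
```

The hard part, of course, is not the precedence logic but the predicates themselves, which is precisely where the self-driving-car dilemmas above come from.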

Of course, while we debate such questions and figure out who will program the answers into our robotic helpers, another challenge emerges: how do you prevent hackers, or the robot itself, from changing the code? And who controls the code in the first place: the government, a corporation, or the individual?

Other social questions will arise as technology becomes further integrated into our lives. For example:

Is it cheating if you sleep with a sex robot?

The ‘True Companion’ sex robot, Roxxxy, on display at the TrueCompanion.com booth at the AVN Adult Entertainment Expo in Las Vegas, Nevada, January 9, 2010. Billed as a world first, the life-size robotic girlfriend comes complete with artificial intelligence and flesh-like synthetic skin. (Photo by ROBYN BECK/AFP/Getty Images)

What if you are yourself part robot: a human with cybernetic implants or robotic enhancements? What are your responsibilities toward “unaltered” humans? Will a new caste system arise, based on a scale from human to robot?

Surely you can come up with more such quandaries, and you can count on having to ponder them, because we already live in the future we once envisioned.

