Is It Ethical to Program Robots to Kill Us?

A new study highlights the ethical dilemmas raised by the rise of robotic and autonomous technology, such as self-driving cars.

As robots and robotic contraptions like self-driving cars become ever more common in our lives, we have to address the significant ethical issues they raise.

One area of immediate concern is the moral dilemmas that self-driving cars might face, and they are close to arriving on a road near you. You can program them with all kinds of safety features, but it's easy to imagine scenarios in which the rules such a car operates by come into conflict with one another.

For example, what if a car had to choose between hitting a pedestrian and hurting its own passengers? Or what if it had to choose between two maneuvers that each endanger someone, like hitting a bus or hitting a motorcyclist?
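To make the conflict concrete, here is a minimal sketch of two such rules in code. Everything in it (the maneuver names, the harm estimates) is hypothetical, invented purely for illustration rather than drawn from the study or any real vehicle's software. The point is simply that two reasonable-sounding rules can rank the same options in opposite order:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrian_harm: int  # expected pedestrians harmed (hypothetical estimate)
    passenger_harm: int   # expected passengers harmed (hypothetical estimate)

def minimize_casualties(options):
    """Rule A: choose the maneuver with the fewest total expected casualties."""
    return min(options, key=lambda m: m.pedestrian_harm + m.passenger_harm)

def protect_passengers(options):
    """Rule B: choose the maneuver that harms the passengers least."""
    return min(options, key=lambda m: m.passenger_harm)

options = [
    Maneuver("stay the course", pedestrian_harm=10, passenger_harm=0),
    Maneuver("swerve into the barrier", pedestrian_harm=0, passenger_harm=1),
]

print(minimize_casualties(options).name)  # -> swerve into the barrier
print(protect_passengers(options).name)   # -> stay the course
```

Rule A sacrifices the passenger; Rule B sacrifices the pedestrians. Both cannot be the car's prime directive at once.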

A new study shows that the public also has a hard time deciding what choice the car should make in such situations. People prefer to minimize casualties: hypothetically, they would rather the car swerve and harm one driver than hit 10 pedestrians. But the same people would not want to buy and ride in such a vehicle. They want their own car to treat their safety as its prime directive.


"Most people want to live in in a world where cars will minimize casualties," says Iyad Rahwan, the co-author of a paper on the study and an associate professor in the MIT Media Lab. "But everybody want their own car to protect them at all costs. If everybody does that, then we would end up in a tragedy... whereby the cars will not minimize casualties".


The numbers work out this way: 76% of respondents thought it more moral for a self-driving car to sacrifice one passenger rather than 10 pedestrians. But when asked whether they themselves would ride in such a car, that percentage dropped by a third. Most respondents also opposed any kind of government regulation of such vehicles, fearing that the government would essentially be choosing who lives and dies in various situations.

The researchers themselves do not have an easy answer. As they put it:

"For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest."

Still, because self-driving vehicles have great potential to eliminate human error, and with it a large share of car accidents, there is a need to figure this out.

The researchers point out that:

"This is a challenge that should be on the mind of carmakers and regulators alike."

And prolonged deliberation might itself be counterproductive, as it:

"may paradoxically increase casualties by postponing the adoption of a safer technology."

You can read the paper, "The social dilemma of autonomous vehicles," in the journal Science. Besides Rahwan, it was written by Jean-Francois Bonnefon of the Toulouse School of Economics and Azim Shariff, an assistant professor of psychology at the University of Oregon.

Sci-fi writer Isaac Asimov famously formulated “The Three Laws of Robotics” all the way back in 1942. Their ethical implications still resonate today. The Three Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Perhaps in anticipation of a Skynet/Terminator-style robotic takeover, Asimov later added a zeroth law that supersedes all the others: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
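Read as software, the laws form a strict priority ordering: a lower law can never override a higher one. Here is a toy sketch of that idea (the boolean flags and example actions are invented for illustration; this is neither Asimov's formulation nor any real system). Comparing compliance tuples lexicographically weighs the zeroth law first, then the first, second, and third:

```python
# Hypothetical sketch: Asimov's laws as a lexicographic preference.
# Each tuple entry is 1 if the action complies with that law, 0 if not.
# Python compares tuples element by element, so the zeroth law dominates
# the first, which dominates the second, which dominates the third.

def compliance(action):
    return (
        int(not action["harms_humanity"]),   # zeroth law
        int(not action["harms_human"]),      # first law
        int(action["obeys_order"]),          # second law
        int(not action["self_destructive"]), # third law
    )

def choose(actions):
    """Pick the action with the lexicographically best compliance tuple."""
    return max(actions, key=compliance)

actions = [
    {"name": "carry out an order to injure a human", "harms_humanity": False,
     "harms_human": True, "obeys_order": True, "self_destructive": False},
    {"name": "refuse the order", "harms_humanity": False,
     "harms_human": False, "obeys_order": False, "self_destructive": False},
]

print(choose(actions)["name"])  # -> refuse the order (First Law outranks Second)
```

The hard part, of course, is not the ordering but computing those flags, which is exactly where the next questions come in.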

Of course, while we debate such questions and figure out who will program the answers into our robotic helpers, another challenge looms: how do you prevent hackers, or the robot itself, from changing the code? Who controls the code? The government, the corporation, or the individual?

Other social questions will arise as technology is integrated further into our lives. For example:

Is it cheating if you sleep with a sex robot?

(Photo: the 'True Companion' sex robot, Roxxxy, billed as a world first, a life-size robotic girlfriend complete with artificial intelligence and flesh-like synthetic skin, on display at the AVN Adult Entertainment Expo in Las Vegas, Nevada, January 9, 2010. ROBYN BECK/AFP/Getty Images)

What if you are yourself part robot: a human with cybernetic implants or robotic enhancements? What are your responsibilities toward an "unaltered" human? Will a new caste system arise along the scale from human to robot?

Surely you can come up with more such quandaries, and you can be sure you'll have to ponder them, because we are already living in the future we once envisioned.



Yale scientists restore brain function to 32 clinically dead pigs

Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.

(Image: still from John Stephenson's 1999 adaptation of Animal Farm.)
  • Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
  • They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
  • The research raises many ethical questions and puts to the test our current understanding of death.

The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, but B-grade sci-fi specifically. What instantly springs to mind are the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?

But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week, Nature published the findings of researchers who managed to restore function to pig brains that were clinically dead. Or at least, what we once thought of as dead.

What's dead may never die, it seems

The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx, an artificial perfusion system — that is, a system that does the job of the body's own blood supply. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse, and their brains had been completely removed from their skulls.

BrainEx pumped an experimental solution into each brain that essentially mimicked blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.

The researchers managed to keep some brains alive for up to 36 hours, and do not yet know whether BrainEx could sustain them longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, a Yale neuroscientist and the study's lead researcher.

As a control, other brains received either a fake solution or no solution at all. None showed revived activity, and all deteriorated as normal.

The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues for such studies would be brain disorders and diseases. This could point the way to new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.

"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, the bioethicists at the Duke University School of Law who wrote the study's commentary, told National Geographic.

An ethical gray matter

Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains' restored activity came nowhere near consciousness.

The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic if they saw signs of consciousness.

Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.

Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?

"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."

One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.

The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.

"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.

It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.

Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how should such boards judge the suffering of a "cellularly active" brain? The distress of a partially alive brain?

The dilemma is unprecedented.

Setting new boundaries

Another science fiction story that comes to mind when discussing this story is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."

She's right. The researchers undertook their research for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.

Scientists see 'rarest event ever recorded' in search for dark matter

The team caught a glimpse of a process that takes 18,000,000,000,000,000,000,000 years.

  • In Italy, a team of scientists is using a highly sophisticated detector to hunt for dark matter.
  • The team observed an ultra-rare particle interaction that reveals the half-life of xenon-124 to be 18 sextillion years.
  • The half-life of a process is how long it takes for half of the radioactive nuclei in a sample to decay (see the relation sketched below).
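For context, a half-life plugs into the standard exponential decay law (a textbook relation, not a result specific to this paper): with a half-life of about 1.8 × 10²² years, the fraction of xenon-124 nuclei surviving after time t is

```latex
N(t) = N_0 \left(\frac{1}{2}\right)^{t / T_{1/2}},
\qquad T_{1/2} \approx 1.8 \times 10^{22}\ \text{years}
```

which is why catching even one such event means watching an enormous number of atoms at once.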