Can a machine be ethical? Why teaching AI ethics is a minefield.

Artificial intelligence will soon be powerful enough to operate autonomously. How should we tell it to act? What kind of ethics should we teach it?

The HAL 9000 computer as seen in 2001: A Space Odyssey.

We are rapidly approaching the day when an autonomous artificial intelligence may have to make ethical decisions of great magnitude without human supervision. The question that we must answer is how it should act when life is on the line.


Helping us make our decision is philosopher James H. Moor, one of the first philosophers to make significant inroads into computer ethics. In his 2009 essay "Four Kinds of Ethical Robots," he examines the ethical responsibilities machines could have and how we ought to think about them.

Dr. Moor categorizes machines of all kinds into four ethical groups. Each group has different ethical abilities that we need to account for when designing and responding to them.

Ethical impact agents 

These are devices, like watches, that can have a positive or negative impact on humans. A watch can do nothing but tell me the time, but if it is wrong, it can make me late.

Implicit ethical agents

These are machines, like ATMs, that have certain ethical concerns addressed in their very design. ATMs, for example, have safeguards to ensure they dispense the correct amount of money and are fair to both you and the bank.


Other machines can be implicitly vicious, such as a torture device designed to inflict maximum pain and to fail safe against comfort. While these machines have distinct ethical features, those features are built into the machine's very being; they are not the result of a decision process.

Explicit ethical agents

These are closer to what most of us think of when we think of programmable robots and artificial intelligence. These devices and machines can be “thought of as acting from ethics, not merely according to ethics.”

To use the ATM example again: while an ATM checks your balance before you run off with all of the bank's money, it doesn't decide to do so because the programmer gave it an ethical code. It checks because it was simply programmed to check.

An explicit ethical agent, by contrast, would be an ATM that was told to always prevent theft and then decided, on its own, to check your balance before handing over the million dollars you asked for, as a means to that end.
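To make the distinction concrete, here is a minimal Python sketch of one way to read Moor's contrast. The function names and string actions are invented for illustration, not drawn from any real ATM software:

```python
# Hypothetical sketch of Moor's implicit/explicit distinction.
# Both "ATMs" refuse over-withdrawals, but for different reasons.

def implicit_atm(balance: float, amount: float) -> str:
    # Implicit ethical agent: the ethics live in one hard-coded rule.
    # The machine never reasons about why the balance matters.
    return "dispense" if amount <= balance else "refuse"

def explicit_atm(balance: float, amount: float) -> str:
    # Explicit ethical agent: it is given the goal "prevent theft"
    # and derives the balance check as a means to that end.
    def is_theft(action: str) -> bool:
        # Dispensing more than the account holds counts as theft.
        return action == "dispense" and amount > balance

    permissible = [a for a in ("dispense", "refuse") if not is_theft(a)]
    return permissible[0]  # prefers dispensing whenever it is honest
```

The behavior is identical; the difference Moor cares about is where the ethics sit: baked into the rule, or computed from a stated goal.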

Full ethical agents

These are beings that function just like us, with free will and a sense of self: a fully moral being, biological or not.

It is safe to say that no machine currently qualifies for this designation, and the bulk of academic and popular discussion focuses on explicit ethical agents. The idea of a fully ethical machine is a fascinating one, however, explored in works such as 2001: A Space Odyssey.


So, if we have to worry about explicit agents, how should we tell them to act?

A major issue for computer ethics is what kind of algorithms an explicit ethical agent should follow. While many science fiction authors, philosophers, and futurists have proposed sets of rules before, most of them fall short.

Dr. Moor gives the example of Isaac Asimov's Three Laws of Robotics. For those who need a refresher, they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The rules are hierarchical, and the robots in Asimov’s books are all obligated to follow them. 
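One way to picture that hierarchy is as a lexicographic ordering: a First Law violation outweighs any number of Second or Third Law violations. Here is a minimal Python sketch of that idea; the three predicate functions are hypothetical stand-ins for judgments a real robot would somehow have to make:

```python
# Hypothetical sketch of Asimov's Three Laws as a strict priority order.
# Python compares tuples element by element, so an action that harms a
# human (True sorts after False) loses to any action that does not,
# regardless of how the lower laws come out.

def choose_action(candidates, harms_human, disobeys_order, endangers_self):
    def violations(action):
        return (harms_human(action),     # First Law dominates everything
                disobeys_order(action),  # Second Law, unless it conflicts
                endangers_self(action))  # Third Law comes last
    return min(candidates, key=violations)
```

Under this ordering, a robot will sacrifice itself rather than disobey an order, and disobey an order rather than harm a human, exactly as the hierarchy demands.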

Dr. Moor suggests that the problems with these rules are obvious. The First Law is so general that an artificial intelligence following it "might be obliged by the First Law to roam the world attempting to prevent harm from befalling human beings" and would therefore be useless for its original function!

Such problems are common in deontological systems, where faithfully following good rules can lead to absurd results. Asimov himself wrote several stories about the laws' unintended consequences. Attempts to solve the issue abound, but the challenge of writing enough rules to cover all possibilities remains.

On the other hand, a machine could be programmed to stick to utilitarian calculus when facing an ethical problem. This would be simple to do: the computer only needs to be given a single variable and told to make whichever choices maximize it. Human happiness is a common choice, but wealth, well-being, or security are also possibilities.
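As a sketch, the whole decision procedure fits in a line of Python. The action names and scores below are made up for illustration, standing in for whatever model rates each option:

```python
# Hypothetical sketch of the utilitarian calculus described above: the
# machine gets one variable (predicted happiness) and maximizes it.

def utilitarian_choice(predicted_happiness: dict[str, float]) -> str:
    # Return whichever candidate action scores highest, no matter
    # what else that action entails.
    return max(predicted_happiness, key=predicted_happiness.get)

# The maximizer is indifferent to *how* the number goes up:
print(utilitarian_choice({
    "comfort_the_unhappy": 0.7,
    "remove_the_unhappy": 0.9,  # perverse, but it scores higher
}))  # -> "remove_the_unhappy"
```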

However, we might get exactly what we ask for. An AI told to maximize human safety might shut down every risky technology it can reach. One told to maximize human happiness might determine that happiness is highest when self-driving cars send all the unhappy people into lakes.

How can we judge machines that don't choose freely? What would make an ethical machine a good one?

This is a tricky one. We hold people who claim they were "just following orders" responsible because we presume they had the free will to do otherwise. With AI, that presumption doesn't hold. Dr. Moor thinks we can still judge how well a machine makes its decisions, however.

As he puts it: "In principle, we could gather evidence about a robot's ethical competence just as we gather evidence about the competence of human decision-makers, by comparing its decisions with those of humans, or else by asking the robot to provide justifications for its decisions."

While this wouldn't cover all aspects of ethical decision-making, it would be a strong start for a device that can only follow an algorithm. And the picture isn't all bleak: Dr. Moor is somewhat optimistic about such machines' ability to make hard choices, suggesting they might decide "more competently and fairly than humans."
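Moor's first kind of evidence, comparing the machine's decisions with human ones, is easy to picture as a procedure. A minimal sketch, with the dilemmas and both decision functions as hypothetical placeholders:

```python
# Hypothetical sketch of gathering evidence about a robot's ethical
# competence by checking its decisions against human decision-makers.

def agreement_rate(dilemmas, machine_decision, human_decision) -> float:
    # Fraction of test cases where machine and human choose alike.
    matches = sum(machine_decision(d) == human_decision(d) for d in dilemmas)
    return matches / len(dilemmas)
```

A high agreement rate wouldn't prove the machine is ethical, but, as Moor suggests, it is the same kind of evidence we use to assess human decision-makers.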

As artificial intelligence gets smarter and our reliance on technology becomes more pronounced, the need for a computer ethics grows more pressing. If we can't agree on how humans should act, how will we ever decide how an intelligent machine should function? We should make up our minds quickly; the progress of AI shows no signs of slowing down.
