Killer robots and the banality of evil

“Lethal autonomous weapon” sounds friendlier than “killer robot.”
Credit: Annelisa Leinbach, Josh / Adobe Stock
Key Takeaways
  • We often accept that certain people — soldiers, spies, and law enforcement, for example — have to kill in the interest of a greater good. In other words, they have a “license to kill.”
  • We are developing technologically capable machines that can autonomously select and engage targets. They can do so with less risk to human operators.
  • The moral problem with these autonomous weapons is that they dehumanize the victim. This echoes a salient point Hannah Arendt made while watching the trial of the Holocaust-enabling Adolf Eichmann.

In Ian Fleming’s world of James Bond, Agent 007 has a “license to kill.” What this means is that Bond has the authority to decide whether to use lethal force to accomplish a greater good. But humans are emotional and fallible. We’re error-prone and biased. That raises the question: If a “license to kill” is a necessity for law enforcement, should it be given to a robot instead?

This is no longer a theoretical concern. We now live in a world where warfare is conducted more and more by technology, from long-distance missiles to unmanned drones. On our sister site, Freethink, we examined the issues surrounding modern-day “robot wars” — that is, using killer robots in a conflict zone. If we let soldiers and spies kill for the “greater good,” then why not extend that privilege to robots?

But, let’s make the issue a bit more personal. Should your local police department be able to use killer robots in your neighborhood?

To Protect and Serve 2.0

“Killer robots” have a more formal name: “lethal autonomous weapons” (LAWs). They have been in the news quite a lot recently. In November, the San Francisco Police Department petitioned the city’s legislators to allow police to use robots that can kill. The SFPD were keen to use robots “when the risk of loss of life to members of the public or officers are imminent and outweigh any other force option available to SFPD.” In other words, they want to use LAWs when they are the best option in a bad situation.

For police to use lethal robots is not without precedent. In 2016, the Dallas police force used a robot carrying explosives to kill a gunman who had already killed five officers. Oakland police have used a robot armed with a shotgun to safely disarm a bomb, and they raised the idea that the same robot could be equipped with live ammunition (although they have since walked that back).

Initially, the SFPD’s request was granted, but it took only a week of pressure from protestors and civil liberties groups for the decision to be reversed. Dean Preston, one of the city’s legislators who had objected to it from the start, said, “The people of San Francisco have spoken loud and clear: there is no place for killer police robots in our city. We should be working on ways to decrease the use of force by local law enforcement, not giving them new tools to kill people.”

The moral question

Who’s right in this debate? If a robot, responsibly programmed and properly regulated, could protect or save the lives of civilians, why shouldn’t we be allowed to use it? There are two important moral differences between a human’s “license to kill” and an AI’s.

The first concerns the extent to which computers can make complex ethical choices on a battlefield or in a law enforcement situation. Almost any complex event involving firearms or weapons will involve an element of “collateral damage,” a euphemism for “civilian deaths.” Yet a human agent can also show moral discretion. They might abandon a mission, for instance, if it places too high a risk on children. Or an agent could change tactics if they deem a target inappropriate or mistaken. A police officer with a gun has a degree of choice that a robot does not: a robot only follows orders. Many LAWs, once they are no longer communicating with their human operators, cannot show discretion. They cannot make a moral decision.

The second issue, however, is not only about the sanctity of life but also about the dignity of death. As the campaign Stop Killer Robots puts it, “Machines don’t see us as people, but just another piece of code to be processed and sorted.” Hannah Arendt, as she watched the Holocaust-enabling Adolf Eichmann on trial, believed his evil was amplified by how detached he was from his job. He had orders to follow and quotas to meet. He saw spreadsheets, not humans. As Arendt put it:

“Eichmann was not Iago and not Macbeth. Except for an extraordinary diligence in looking out for his personal advancement, he had no motives at all… he never realized what he was doing… It was sheer thoughtlessness — something by no means identical with stupidity — that predisposed him to become one of the greatest criminals of that period… such remoteness from reality and such thoughtlessness can wreak more havoc than all the evil instincts taken together.” 

Reading this, it’s not too hard to see the robotic aspect of Eichmann: an inhuman, calculating view of life. Having drones or robots kill people is no more evil than a bullet or a spear. Having AI decide or identify whom to kill is. LAWs fail to appreciate humans as dignified and worthy of life, so it’s hard to imagine they can appreciate humans at all. In short, killer robots are the ultimate manifestation of Arendt’s famous expression “the banality of evil.”

Jonny Thomson teaches philosophy in Oxford. He runs a popular account called Mini Philosophy and his first book is Mini Philosophy: A Small Book of Big Ideas.
