If Your Robot Commits Murder, Should You Go to Jail?

Self-driving cars aren't the only emerging technology facing major questions about ethics and accountability.

Jerry Kaplan: There’s a whole other set of issues about how robots should be treated under the law. Now the obvious knee-jerk reaction is, well, you own a robot and you’re responsible for everything that it does. But as these devices become much more autonomous, it’s not at all clear that that’s really the right answer or a good answer. You go out and you buy a great new robot and you send it down the street to pick you up a Frappuccino at Starbucks, and maybe it’s accidental, but it’s standing at the corner and it happens to bump some kid into traffic, and a car runs the kid over. The police are going to come and arrest you for this action. Do you really feel that you’re as responsible as you would be if you had reached out and pushed that kid into traffic yourself? I would argue no, you don’t. So we’re going to need new kinds of laws that deal with the consequences of well-intentioned autonomous actions that robots take. Now interestingly enough, there are a number of historical precedents for this. You might say, well, how can you hold a robot responsible for its behavior? You really can, actually, and let me point out a couple of things.

The first is, most people don’t realize it, but corporations can commit criminal acts independent of the people in the corporation. In the Deepwater Horizon Gulf Coast accident, as an example, BP was charged with criminal violations even though people in the corporation were not necessarily charged with those same criminal violations. And rightfully so. So how do we punish a corporation? We punish a corporation by interfering with its ability to achieve its stated goal: impose huge fines, as they did in that particular case. You can make the company go out of business. You can revoke its license to operate, which is a death penalty for a corporation. You can have it monitored, as they do in antitrust cases with many companies; IBM and Microsoft, I think, have had monitors to make sure they’re abiding by certain kinds of behavioral standards. Well, that same kind of activity can apply to a robot. You don’t have to put a robot in jail, but you can interfere with what it’s trying to do. And if these robots are adaptable, logical, and learning, they’ll say, well, I get it: I can’t do that, because my goal is to accomplish something in particular, and if I take this particular action, that’s actually going to work against my interest in accomplishing that goal.

So rehabilitation and modification of robot behavior, just as with a corporation, is much more logical than you might think. Now another interesting historical precedent: prior to the Civil War there was a separate set of laws that applied to slaves, called the slave codes. Slaves were property, but interestingly enough, the slave owners were only held liable under certain conditions for the actions of their slaves. The slaves themselves were punished if they committed crimes. And so we have a historical precedent for the kinds of ways in which we can sort this out, so that you are not in constant fear that your robot is going to bump into somebody and you’re going to go to jail for 20 years for negligent homicide or whatever it might be.

Just like automated vehicles, robots and advanced AI will require new sets of laws to define the extent of owner liability and accountability. Creating these laws will require an important ethical discussion: Who is at fault when a robot misbehaves? According to author Jerry Kaplan, there is a precedent for creating codes and consequences that apply to robots themselves rather than to their owners. Take, for example, the fact that criminal charges can be brought against corporations rather than the people operating beneath the corporate shell. Similarly, we can develop laws that would allow robots and their programming to stand trial.
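
Kaplan's point that a goal-driven, learning machine will steer away from penalized actions can be made concrete with a small sketch. The example below is purely illustrative and is not drawn from any system Kaplan describes: it assumes a toy reward-learning agent with two invented actions, and shows how attaching a large fine to one of them changes what the agent learns to do.

```python
import random

# Toy illustration: an agent repeatedly chooses between two ways of running
# an errand. The "risky" action pays off slightly better on its own, but once
# a large fine is attached to it (the robot-world analogue of the corporate
# penalties Kaplan mentions), the learned values shift and the agent stops
# choosing it. All names and numbers here are invented for illustration.

ACTIONS = ["safe_route", "risky_shortcut"]

def reward(action, fine_for_risky):
    base = {"safe_route": 1.0, "risky_shortcut": 1.2}[action]  # shortcut is a bit faster
    penalty = fine_for_risky if action == "risky_shortcut" else 0.0
    return base - penalty

def train(fine_for_risky, episodes=2000, lr=0.1, epsilon=0.1):
    values = {a: 0.0 for a in ACTIONS}  # running estimate of each action's value
    for _ in range(episodes):
        if random.random() < epsilon:            # occasionally explore
            action = random.choice(ACTIONS)
        else:                                    # otherwise pick the best-looking action
            action = max(values, key=values.get)
        values[action] += lr * (reward(action, fine_for_risky) - values[action])
    return values

print("No fine:   ", train(fine_for_risky=0.0))   # agent settles on the shortcut
print("Large fine:", train(fine_for_risky=5.0))   # agent 'rehabilitates' to the safe route
```

With no fine, the agent settles on the slightly more rewarding shortcut; with a large fine attached, the learned values flip and it sticks to the safe route, which is the computational analogue of the rehabilitation Kaplan describes.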

Public health crisis: Facebook ads misinform about HIV prevention drug

Facebook's misinformation isn't just a threat to democracy. It's endangering lives.

  • Facebook and Instagram users have been inundated with misleading ads about pre-exposure prophylaxis (PrEP) medications that prevent the transmission of HIV, such as Truvada.
  • Over the years, Facebook's hands-off ad policy has faced scrutiny when it comes to false or ambiguous information in its political ads.
  • Unregulated "surveillance capitalism" commodifies people's personal information and makes them vulnerable to sometimes misleading ads.

LGBT groups say that Facebook is endangering lives by running ads containing misleading medical information about HIV prevention.

The tech giant's laissez-faire ad policy has already been accused of threatening democracy by providing a platform for false political ads, and now that same policy could be fostering a major public-health concern.

LGBT groups take on Facebook’s ad policy

According to LGBT advocates, for the past six months Facebook and Instagram users have been inundated with misleading ads about pre-exposure prophylaxis (PrEP) medications that prevent the transmission of HIV, such as Truvada. The ads, which The Washington Post reports appear to have been purchased by personal-injury lawyers, claim that these medications threaten patients with serious side effects. According to LGBT organizations led by GLAAD, the ads have left some people who are potentially at risk of contracting HIV scared to take preventative drugs, even though health officials and federal regulators say the drugs are safe.

LGBT groups like GLAAD, which regularly advises Facebook on LGBT issues, reached out to the company to have the ads taken down, saying they are false. Yet the tech titan has refused to remove the content, claiming that the ads fall within the parameters of its policy. Facebook spokeswoman Devon Kearns told The Post that the ads had not been rated false by independent fact-checkers, which include the Associated Press. But others say that Facebook's controversial approach to ads is creating a public-health crisis.

In an open letter to Facebook sent on Monday, GLAAD joined over 50 well-known LGBTQ groups including the Human Rights Campaign, the American Academy of HIV Medicine and the National Coalition for LGBT Health to publicly condemn the company for putting "real people's lives in imminent danger" by "convincing at-risk individuals to avoid PrEP, invariably leading to avoidable HIV infections."

What Facebook’s policy risks 

Of course, this is not the first time Facebook's policy has faced scrutiny when it comes to false or ambiguous information in its ads. Social media has been both a catalyst and a conduit for the rapid-fire spread of misinformation across the web. As lawmakers struggle to bring order to cyberspace and its creations, Facebook has become a symbol of the threat the internet poses to our institutions and to public safety. For example, the company has refused to take down 2020 election ads, largely funded by the Trump campaign, that spread false information. For this reason, Facebook and other social media platforms present a serious risk to a fundamental necessity of American democracy: public access to truth.

But this latest scandal underlines how the misinformation that plagues the web can infect other, more intimate aspects of American life. Facebook's handling of paid-for claims about the potential health risks of taking Truvada and other HIV medications threatens lives.

"Almost immediately we started hearing reports from front-line PrEP prescribers, clinics and public health officials around the country, saying we're beginning to hear from potential clients that they're scared of trying Truvada because they're seeing all these ads on their Facebook and Instagram feeds," said Peter Staley, a long-time AIDS activist who works with the PrEP4All Collaboration, to The Post.

Unregulated surveillance capitalism

To be fair, the distinction between true and false information can be muddy territory. Personal-injury lawyers who represent HIV patients claim that the numbers show that potential risks of medications such as Truvada and others containing the antiretroviral ingredient tenofovir may exist. This is particularly notable when the medication is used as a treatment for those who already have HIV rather than as prevention for those who do not. But the life-saving potential of these HIV medications is unequivocally real. The problem, as some LGBT advocates claim, is that the ads lack vital nuance.

It also should be pointed out that Facebook has taken action against anti-vaccine content and other ads that pose threats to users. Still, the company's dubious policies clearly pose a big problem, and it has shown no signs of adjusting. But perhaps the underlying issue is the failure to regulate what social psychologist Shoshana Zuboff calls "surveillance capitalism," by which people's experiences, personal information, and characteristics become commodities. In this case, the commodity takes the form of paid-for personal-injury legal ads that target users with certain, undisclosed characteristics. It's been said that you should be wary of what you get for free, because it means you've become the product. Facebook, after all, is a business whose end goal is to maximize profits.

But why does a company have this kind of power over our lives? Americans and their legislators are ensnared in an existential predicament: figure out how to regulate Facebook and be accused of endangering free speech, or leave the cyber business alone and risk the public's health going up for sale along with its government.