Would You Buy a Car That’s Programmed to Kill You? You Just Might.
Author and entrepreneur Jerry Kaplan offers an interesting crash course on computational ethics, the idea that robots and machines will require programming to make them cognizant of morals, decorum, manners, and various other social nuances.
Jerry Kaplan is widely known in the computer industry as a serial entrepreneur, inventor, scientist, and author. He is currently a Fellow at the Stanford Center for Legal Informatics. He also teaches "Philosophy, Ethics, and Impact of Artificial Intelligence" in the Computer Science Department at Stanford University.
Kaplan co-founded several ventures including Winster.com (social games); Onsale.com (online auctions); GO Corporation (tablet computers); and Teknowledge (expert systems). He wrote a best-selling non-fiction book titled "Startup: A Silicon Valley Adventure", selected by Business Week as one of the top ten business books of the year and optioned to Sony Pictures, with translations available in Japanese, Chinese, and Portuguese. His latest book is titled Humans Need Not Apply.
Kaplan co-invented numerous products including the Synergy (the first all-digital keyboard instrument, used for the soundtrack of the movie TRON); Lotus Agenda (the first personal information manager); PenPoint (the tablet operating system used in the first smartphone, AT&T's EO 440); the GO computer (the first tablet computer); and Straight Talk (Symantec Corporation's first natural language query system). He is also co-inventor of the online auction (patents now owned by eBay) and is named on 12 U.S. patents.
He has published papers in refereed journals including Artificial Intelligence, Communications of the ACM, Computer Music Journal, The American Journal of Computational Linguistics, and ACM Transactions on Database Systems.
Kaplan was awarded the 1998 Ernst & Young Entrepreneur of the Year, Northern California; served as a member of the Governor's Electronic Commerce Advisory Council under California Governor Pete Wilson (1999); and received an Honorary Doctorate of Business Administration from California International Business University, San Diego, California (2004).
He has been profiled in The New York Times, The Wall Street Journal, Forbes, Business Week, Red Herring, and Upside, and is a frequent public speaker.
Jerry Kaplan: Machines are becoming increasingly autonomous, by which I mean they can sense their environment and make decisions about what to do or what not to do. Of course those decisions are based on their programming and their experience, but we don't have as direct control over what they do as we do with the kinds of technology we have today. Now, there are a couple of very interesting consequences of that. One of them is that they're going to be faced with having to make ethical decisions. The junior version of that, call it ethics junior, is just making socially appropriate decisions. We're taking machines and putting them in situations where they're around people. And something we take for granted, something that seems so natural to us but is neither obvious nor natural to a machine, is the normal set of social courtesies and conventions we operate by in dealing with other people. You don't want a delivery robot running down the sidewalk so that everybody has to get out of its way; it has to be able to move through a crowd in a socially appropriate way. Or take your autonomous car. There are lots of very interesting ethical conundrums that come up, but a lot of them are just social. It pulls up to the crosswalk. Should you cross? Should you wait? How is it going to signal you? Right now the social convention is that you make eye contact with the driver, and that tells you whether to cross.
Now, I can't make eye contact with an autonomous car, so there are lots of these rough edges around how machines ought to behave. And the situations are highly variable. You can't just make a list of them and say do this here and do that there. We need to program into these devices some fairly general principles, call them ethical if you like, which will allow them to guide their own behavior in ways and in directions that are consistent with the expectations we have in society.
Now, I'm teaching at Stanford and I can tell you I haven't seen anything about this in the engineering curriculum. There are courses on how to be an ethical engineer, but not on how to build a device that behaves ethically. This is a completely new area. It sometimes goes by the name of moral programming, or computational ethics. There are some excellent books on the subject, but unfortunately, if you read those books, which I have to do because that's my job, they mostly point out the problems. Nobody has a really good scheme for how to go about doing this. So we need to develop an engineering discipline of computational ethics, and we need course sequences in our engineering schools that teach how to get machines to behave appropriately in a wide variety of new circumstances.
Let me point out some of the more serious kinds of conundrums just to give you a feel for it, and then some others that are merely inconveniences. On the very serious side there's a classic philosophical debate over what's called the trolley problem. The trolley problem is basically this: you're in a trolley and the track splits ahead. If you take no action, the trolley will go to the right, where there are four people on the track, and it will kill them. You can flip a switch and it will go down the left track, where there is only one person. The ethical question is: is it ethical to flip that switch? It is true that the loss of life would be minimized, but it is also true that you have now taken an action to kill somebody. And if you're that one person, you may not think that's the right thing to do. Philosophers have studied this and many variations of it, and there's a lot of very subtle and interesting work in this area. But this is about to become very real, because autonomous cars will face exactly these kinds of decisions. Say I buy an autonomous car and I'm in it. I'm the one guy. And there may be circumstances in which there are four people in front of the car in some way, and to save their lives my car has to drive off the edge of a bridge.
There's a philosophical theory called utilitarianism, which has been around for a couple of centuries at least, that would say maximizing the good for society means my car should kill me. But I'm not buying that car. And so we have a conundrum here. I don't want to see people buy a Ford instead of a Chevy because the Ford is more likely to save the driver's life no matter what, while the Chevy is a little more forgiving and might kill the driver to save the lives of other people. I don't want that to be a selling point in cars. So we need to have a societal discussion over how this works. To show why that is so interesting, I'll give you a little twist on what I just said. So far we've been talking about me buying an autonomous car. But suppose instead I'm signed up for the great Uber network in the sky of the future, cars come when summoned, and I don't own the car. Now I feel a little differently about it, because it's not my car; it's more like getting on a train. And you would never allow the people on a train to vote on, say, whom the train should kill and whom it should spare.
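To make the tension Kaplan describes concrete, here is a minimal, hypothetical sketch in Python of two candidate "ethical policies" applied to the same emergency. The names, numbers, and functions are invented for illustration; this is a toy model of the dilemma, not anyone's actual vehicle software.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_deaths: int    # expected deaths inside the vehicle
    bystander_deaths: int   # expected deaths outside the vehicle

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Minimize total expected deaths, no matter whose they are."""
    return min(options, key=lambda m: m.occupant_deaths + m.bystander_deaths)

def occupant_first_choice(options: list[Maneuver]) -> Maneuver:
    """Protect the car's occupant first, then minimize harm to others."""
    return min(options, key=lambda m: (m.occupant_deaths, m.bystander_deaths))

# Kaplan's bridge scenario, with made-up outcomes for each maneuver:
options = [
    Maneuver("brake in lane", occupant_deaths=0, bystander_deaths=4),
    Maneuver("swerve off the bridge", occupant_deaths=1, bystander_deaths=0),
]

print(utilitarian_choice(options).name)     # "swerve off the bridge"
print(occupant_first_choice(options).name)  # "brake in lane"
```

The two policies disagree on the same facts, which is exactly the difference Kaplan does not want to become a selling point when comparing one brand of car with another.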
In that case it makes more sense for the average interests of society to be what governs the car's behavior. So when I think about this issue, even the fact of who owns the car changes my own moral judgment about it. We need to be able to take these kinds of principles, talk about them, vet them, and put them into cars. So autonomous cars raise a number of issues that are very, very important. Now, so far I've just talked about life and death, but there are lots of shades of gray in between that are really quite different. In fact I'm going to argue that we're already down this path and haven't even recognized it, for a very interesting reason: to avoid raising this problem, the car manufacturers do not talk about it as artificial intelligence. Let me give you an example. A common function in cars is ABS, the anti-lock braking system. What it does, if it detects, which it can, that you're about to skid, is pump the brakes and do various other things to maintain control of the car and keep it going in a particular direction.
Now, what you might not know is that ABS, in many cases and on certain surfaces, has a longer stopping distance than if you just jammed on the brakes, locked them, and let the car spin around. So imagine you're driving your car and, oh my god, there's a kid in the middle of the road. You just want that car to stop as quickly as it can, and you slam on the brakes. Well, with today's technology, the car is going to prioritize keeping itself going straight over not running over that kid. There are circumstances in which that decision, made by an engineer a while back when designing the system, that we want to keep the car stable, takes the choice away from you. You no longer have the freedom to decide: I don't mind if the car spins out of control as long as I miss that kid. Now imagine that the ABS function had instead been described as simulating the actions of a professional driver: we're taking that judgment and programming it into a machine using advanced artificial intelligence techniques, so that the car can stay under control the same way a professional driver might. We might have felt a little differently about it if it had been presented that way, as an AI technology. But by saying it's simply a function of the car, like the turn signals and everything else, the issue never really got raised. It never really got vetted. But as we look toward fully autonomous driving, it's going to be a problem.
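A simplified, hypothetical illustration of the design choice Kaplan is pointing at: in the toy controller below (invented names and thresholds, nothing like a real ABS implementation), "keep the car stable" is hard-coded over "stop as short as possible," and there is no branch that lets the driver revisit that trade-off.

```python
def braking_command(wheel_slip: float, driver_pressure: float) -> float:
    """Toy anti-lock logic: back off brake pressure when the wheels start to lock.

    On some surfaces, locked wheels can actually stop the car sooner, but this
    controller never offers that option; stability always wins.
    """
    SLIP_THRESHOLD = 0.2  # invented value; real controllers are far more complex
    if wheel_slip > SLIP_THRESHOLD:
        return driver_pressure * 0.5  # release pressure to regain traction and steering
    return driver_pressure            # otherwise brake as hard as the driver asks

# The driver stamps the pedal to the floor, the wheels begin to lock,
# and the controller quietly overrides the request:
print(braking_command(wheel_slip=0.5, driver_pressure=1.0))  # 0.5
```

The engineer's value judgment lives in what the function refuses to consider, not in any line that mentions ethics.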
Let me move on to less severe situations. You're in your autonomous car on a two-lane street, and this happens all the time: there's a UPS truck right in front of you that has just come to a stop. The driver jumps out, opens up the back, grabs a package, and heads off. Now, you as a driver are permitted a certain amount of latitude in how you behave. What would you do? You look around, you cross the double yellow line, and you pass that UPS truck. It's perfectly acceptable behavior. May I point out, you're breaking a rule: you're crossing a double yellow line. If we were to program our cars simply to say you're never allowed to cross a double yellow line, that car is going to sit there until the driver is done, which might be a very long time if he's gone to lunch. So the kind of latitude we permit people, the ability to break or bend rules in an appropriate way in a lot of these circumstances, is something we need to talk about: is it okay for a car to engage in that kind of behavior?
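Here is a minimal sketch, again with hypothetical names and thresholds rather than anything from a real driving stack, contrasting a rigid rule with the kind of rule-bending Kaplan says human drivers are already allowed.

```python
def rigid_policy(oncoming_clear: bool, blocked_for_s: float) -> bool:
    """Never cross a double yellow line, no matter what."""
    return False

def flexible_policy(oncoming_clear: bool, blocked_for_s: float) -> bool:
    """Cross only in the narrow exception human drivers routinely make:
    the lane has been blocked by a stopped vehicle for a while and
    oncoming traffic is clear."""
    return oncoming_clear and blocked_for_s > 30.0  # invented threshold

# Stuck behind the parked UPS truck, with a clear oncoming lane:
print(rigid_policy(oncoming_clear=True, blocked_for_s=120.0))     # False: wait indefinitely
print(flexible_policy(oncoming_clear=True, blocked_for_s=120.0))  # True: pass the truck
```

The hard part, as the surrounding discussion makes clear, is not writing such an exception but deciding, as a society, which exceptions a machine should be allowed to make.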
Let me give you another one. How would you feel if you went down to the movie theater, tickets were scarce, and all of a sudden you found 16 robots in line in front of you, with you at the back of the line? You might say, wait a minute, that's not fair. Why are there 16 robots here picking up tickets for whoever owns them? I'm here; we should prioritize me over those robots. I think when that begins to happen in practice, people will be up in arms, because they can see what is actually happening. But that same situation is already happening today. Try to get a ticket to Billy Joel at Madison Square Garden: scalpers run programs that snap up all of those tickets in a matter of seconds, leaving all the humans who are sitting there trying to press the return button, and god forbid fill out the little CAPTCHA, with nothing. It's exactly the same situation: robots owned by and working for somebody else grab an asset before you have a fair chance to acquire it, to get that particular ticket. And if people could see that, they would be really mad today, but it's invisible because all this stuff is in the cloud. So we're already facing a lot of these same ethical and social issues, but they're not as visible as they need to be for us to have a meaningful public discussion about them.
How will the computer controlling your automated car interact with pedestrians? Who will teach robots what's socially acceptable behavior and what is not? These are the sorts of questions on the minds of people like Jerry Kaplan, who in this video offers an interesting crash course on computational ethics. Robots and machines are going to need programming that makes them cognizant of decorum, manners, and various other social nuances. And as Kaplan notes, no one is really quite certain how it's all going to be done. This is because any technology that takes accountability and decision-making away from human "operators" is innately going to be drenched in uncomfortable, uncertain philosophical dilemmas. These are big issues that require a thorough social discussion. What are we willing to accept? Where do we draw the line? There might come a day when artificial intelligence is able to answer these questions by itself. Until then, we're responsible for shaping A.I. to suit our still-to-be-determined values.