Would You Buy a Car That’s Programmed to Kill You? You Just Might.
Author and entrepreneur Jerry Kaplan offers an interesting crash course on computational ethics, the idea that robots and machines will require programming to make them cognizant of morals, decorum, manners, and various other social nuances.
Jerry Kaplan is widely known in the computer industry as a serial entrepreneur, inventor, scientist, and author. He is currently a Fellow at The Stanford Center for Legal Informatics, and he teaches "Philosophy, Ethics, and Impact of Artificial Intelligence" in the Computer Science Department at Stanford University.
Kaplan co-founded several ventures including Winster.com (social games); Onsale.com (online auctions); GO Corporation (tablet computers); and Teknowledge (expert systems). He wrote a best-selling non-fiction book entitled "Startup: A Silicon Valley Adventure", selected by Business Week as one of the top ten business books of the year and optioned to Sony Pictures, with translations available in Japanese, Chinese, and Portuguese. His latest book is titled "Humans Need Not Apply".
Kaplan co-invented numerous products including the Synergy (first all-digital keyboard instrument, used for the soundtrack of the movie TRON); Lotus Agenda (first personal information manager); PenPoint (tablet operating system used in the first smartphone, AT&T's EO 440); the GO computer (first tablet computer); and Straight Talk (Symantec Corporation's first natural language query system). He is also co-inventor of the online auction (patents now owned by eBay) and is named on 12 U.S. patents.
He has published papers in refereed journals including Artificial Intelligence, Communications of the ACM, Computer Music Journal, The American Journal of Computational Linguistics, and ACM Transactions on Database Systems.
Kaplan was awarded the 1998 Ernst & Young Entrepreneur of the Year, Northern California; served on the Governor’s Electronic Commerce Advisory Council Member under Pete Wilson, Governor of California (1999); and received an Honorary Doctorate of Business Administration from California International Business University, San Diego, California (2004).
He has been profiled in The New York Times, The Wall Street Journal, Forbes, Business Week, Red Herring, and Upside, and is a frequent public speaker.
Jerry Kaplan: Machines are becoming increasingly autonomous, by which I mean they can sense their environment and make decisions about what to do or what not to do. Of course that's based on their programming and their experience, but we don't have as direct control over what they do as we do today with the kinds of technology that we have. Now there are a couple of very interesting consequences of that. One of them is that they're going to be faced with having to make ethical decisions. The junior version of this, I'll call it, is just making socially appropriate decisions. We're taking machines and putting them in situations where they're around people. And something that we take for granted, something that seems so natural to us but that machines do not take for granted and do not find natural, is the normal set of social courtesies and conventions we operate by in dealing with other people. You don't want a robot making a delivery to run down the sidewalk so that everybody's got to get out of the way. It has to be able to move through a crowd in a socially appropriate way. Or take your autonomous car. There are lots of very interesting ethical conundrums that come up, but a lot of them are just social. Okay, it pulls up to the crosswalk. Should you cross? Should you wait? How is it going to signal you? Right now the social convention is that you make eye contact with the driver, and that tells you whether to cross.
Now I can’t make eye contact with an autonomous car, so there are lots of these sort of rough edges around how machines ought to be able to behave. And the situations are highly variable. You can’t just make a list of them and say do this and do that. We need to program into these devices some fairly general principles. You can call it ethical if you like, which will allow them to guide their own behavior in ways and in directions that are consistent with the expectations that we have in society.
Now I'm teaching at Stanford, and I can tell you I haven't seen anything about this in the engineering curriculum. There's "how to be an ethical engineer," but there isn't "how do you build a device to be ethical." This is a completely new area. It sometimes goes by the name of moral programming, or computational ethics. There are some excellent books on this subject. But unfortunately, if you read those books, which I have to do because that's my job, they're mostly pointing out the problems. Nobody has a really good scheme for how to go about doing this. So we need to develop an engineering discipline of computational ethics, and we need course sequences in our engineering schools that teach how to get machines to behave appropriately in a wide variety of new circumstances.
Let me point out some of the more serious kinds of conundrums just to give you a feel for it, and then others that are just inconveniences, okay. On the very serious side there's a classic philosophical debate over what's called the trolley problem. The trolley problem is basically this: you're in a trolley and there's a track that splits. If you take no action the trolley is going to go to the right, where there are four people on the track, and it's going to kill those people. You can flip a switch and it'll go down the left track, where there's only one person. The ethical question is: is it ethical to flip that switch? It is true that the loss of life would be minimized, but it is also true that you have now taken an action to kill somebody. And if you're that person, you may not think that's the right thing to do. Philosophers have been studying this problem and its many variations, and there's a lot of very subtle and interesting work on it. But this is about to become very real, because autonomous cars will face exactly these kinds of decisions. So I'm going to buy an autonomous car and I'm in the car. I'm the one guy. And there may be circumstances in which there are four lives, four people, in front of the car in some way, and to save their lives my car has to drive off the edge of the bridge.
There's a philosophical theory called utilitarianism, which has been around for a couple of centuries at least, that would say maximizing the good for society means my car should kill me. But I'm not buying that car. And so we have a conundrum here. I don't want to see people buy a Ford instead of a Chevy because the Ford is more likely to save my life no matter what, while the Chevy is going to be a little more forgiving of that and might kill me to save the lives of other people. I don't want that to be a selling point in cars. So we need to have a societal discussion over how this works. To demonstrate why that is so interesting, I'll just give you a little twist on what I said. Right now we're talking about me buying an autonomous car. But let's suppose I'm signed up for the great Uber network in the sky of the future, and cars come when I need them, and I don't own that car. Now I feel a little differently about it, because it's not my car; it's like I'm getting on a train. And you would never allow the people on a train to vote on, you know, whom the train should kill and whom it should spare.
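To make the conundrum concrete, here is a deliberately minimal sketch of how the choice of objective function changes the outcome Kaplan describes. Everything in it, the function names, the "swerve/stay" actions, the counts, is invented for illustration; no real vehicle software works this way.

```python
# Two hypothetical decision policies for a trolley-style situation.
# All names and values are illustrative, not drawn from any real system.

def utilitarian_choice(occupants: int, pedestrians: int) -> str:
    """Minimize total expected loss of life, regardless of who is in the car."""
    return "swerve" if occupants < pedestrians else "stay"

def occupant_first_choice(occupants: int, pedestrians: int) -> str:
    """Never sacrifice the car's occupants, whatever the count outside."""
    return "stay"

# One occupant, four pedestrians ahead: the two policies disagree.
print(utilitarian_choice(1, 4))     # the utilitarian car sacrifices its owner
print(occupant_first_choice(1, 4))  # the owner-protective car does not
```

The point of the sketch is Kaplan's: the disagreement is not a bug in either function but a choice of objective, and that choice is exactly what he argues should be vetted socially rather than left as a product differentiator.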
Then it makes more sense for the societal average interest to be operational. So when I think about this issue, even the fact of who owns the car changes my own moral judgment about this particular kind of issue. Well, we need to be able to take these kinds of principles, talk about them, vet them, and put them into cars. So autonomous cars raise a number of different issues that are very, very important. Now, so far I've just talked about life and death. But there are lots of shades of gray in between that are really quite different. In fact, I'm going to make an argument to you today that we're already down this path and we haven't even recognized it yet, for a very interesting reason: in order to avoid pointing out this problem, the car manufacturers do not talk about this as artificial intelligence. Let me give you an example. A common function in cars is ABS, the anti-lock braking system. What that will do is, if it can detect, which it can, that you're about to skid, it's going to pump the brakes and do various things to maintain control of the car and keep it going in a particular direction.
Okay, now what you might not know is that ABS, in many cases on certain surfaces, has a longer stopping distance than if you just jammed on the brakes, locked them, and let the car spin around. So imagine you're driving your car and, oh my god, there's a kid in the middle of the road. You just want that car to stop as quickly as it can, and you slam on the brakes. Well, with today's technology, the car is going to prioritize keeping straight over stopping short of that kid. There are circumstances in which that decision, which an engineer made a while back in designing that system (we want to keep the car stable), takes away your freedom to decide: I don't mind if the car spins out of control as long as I miss that kid. Now imagine that the ABS function had been described this way: we're simulating the actions of a professional driver; we're taking that judgment and programming it into a machine using these advanced artificial intelligence techniques, so the car can keep under control the same way a professional driver might. We might have felt a little differently if I had presented you with that example and we were talking about it as an AI technology. But by saying it's simply a function of the car, like every other function, you know, like the turn signals and everything else, this issue never really got raised. It never really got vetted. But as we look to the future of autonomous driving, it's going to be a problem.
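The buried design choice Kaplan describes can be sketched as a few lines of policy logic. This is a hypothetical illustration, not how any real ABS controller is written: the surface model, the `driver_intent` parameter, and the function name are all invented to show where the engineer's judgment is hard-coded.

```python
# Hypothetical sketch of the trade-off hidden inside an ABS-style system.
# On some loose surfaces a locked wheel can stop shorter, but the car may
# spin; production systems hard-code stability, with no driver override.

def braking_strategy(surface: str, driver_intent: str = "stability") -> str:
    """Choose between modulated (ABS-style) and locked braking."""
    if driver_intent == "shortest_stop" and surface in ("gravel", "snow"):
        return "lock_wheels"    # shorter stop, but the car may spin
    return "modulate_brakes"    # ABS behavior: preserve steering control

# Today's default: the "stability" intent is baked in and cannot change.
print(braking_strategy("gravel"))                   # modulated braking
print(braking_strategy("gravel", "shortest_stop"))  # the override no driver gets
```

The default argument is the sketch's whole point: the value an engineer chose years ago silently overrides whatever the person behind the wheel would choose in the moment.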
Let me move on to less severe situations. You're in your autonomous car on a two-lane street, and this happens all the time: there's a UPS truck right in front of you that's just come to a stop. The guy jumps out, opens up the back, grabs a package, and starts heading off. Now, you as a driver are permitted a certain amount of latitude in how you behave. What would you do? You look around it, you go across the double yellow line, and you pass that UPS truck. It's perfectly acceptable behavior. May I point out, you're breaking a rule: you're crossing a double yellow line. If we were to program our cars simply to say you're never supposed to cross a double yellow line, that car is going to sit there until the guy is done, which might be a very long time if he's gone to lunch. So the kind of latitude we permit people in a lot of these circumstances, to break or bend rules in an appropriate way: we need to talk about whether it's okay for a car to engage in that kind of behavior.
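One way to see what "programming in latitude" means is to turn the blanket prohibition into a defeasible rule with explicit exception conditions. The predicates and the wait threshold below are invented for illustration; the point is only the shape of the rule, not its specific values.

```python
# Hypothetical sketch: a traffic rule with bounded, explicit exceptions,
# replacing the absolute rule "never cross a double yellow line".
# All thresholds and predicates are illustrative assumptions.

def may_cross_double_yellow(obstruction_stopped: bool,
                            oncoming_lane_clear: bool,
                            wait_time_s: float) -> bool:
    """Permit crossing only when the lane is blocked by a stopped vehicle,
    the oncoming lane is verifiably clear, and a reasonable wait has elapsed."""
    return obstruction_stopped and oncoming_lane_clear and wait_time_s > 10.0

# A parked delivery truck, a clear oncoming lane, a 30-second wait.
print(may_cross_double_yellow(True, True, 30.0))   # crossing permitted
# The rule still binds when the oncoming lane is not clear.
print(may_cross_double_yellow(True, False, 30.0))  # crossing forbidden
```

Even this toy version shows why the problem is hard: every exception condition is itself a judgment call (how long is "a reasonable wait"? how clear is "clear"?), which is exactly the vetting conversation Kaplan says society has not yet had.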
Let me give you another one. How would you feel if you went down to the movie theater, where tickets are scarce, and all of a sudden you found 16 robots in line in front of you, with you at the back of the line? You might say, wait a minute, that's not fair. Why do we have 16 robots here picking up tickets for whoever owns them? I'm here, you know. We should prioritize me over those robots. I think when that begins to happen in practice, people will be up in arms, because they can see what is actually happening. But that same situation is already happening today. If you try to get a ticket to Billy Joel at Madison Square Garden, scalpers run programs that snap up all of those tickets in a matter of seconds, leaving all the humans, who are sitting there trying to press the return button and, god forbid, fill out the little CAPTCHA, with nothing. It's exactly the same situation: robots owned by and working for somebody else are grabbing an asset before you have a fair chance to acquire it, to get that particular ticket. And if you could see that, people would be really mad today, but it's invisible because all this stuff is in the cloud. So we're already facing a lot of these same ethical and social issues, but they're not as visible as they need to be for us to have a meaningful public discussion about these particular topics.
How will the computer controlling your automated car interact with pedestrians? Who will teach robots what's socially acceptable behavior and what is not? These are the sorts of questions on the minds of people like Jerry Kaplan, who in this video offers an interesting crash course on computational ethics. Robots and machines are going to need programming that makes them cognizant of decorum, manners, and various other social nuances. And as Kaplan notes, no one is really quite certain how it's all going to be done. This is because any technology that takes accountability and decision-making away from human "operators" is innately going to be drenched in uncomfortable, uncertain philosophical dilemmas. These are big issues that require a thorough social discussion. What are we willing to accept? Where do we draw the line? There might come a day when artificial intelligence is able to answer these questions by itself. Until then, we're responsible for shaping A.I. to suit our still-to-be-determined values.
SEAL training is the ultimate test of both mental and physical strength.
- The fact that U.S. Navy SEALs endure very rigorous training before entering the field is common knowledge, but just what happens at those facilities is less often discussed. In this video, former SEALs Brent Gleeson, David Goggins, and Eric Greitens (as well as authors Jesse Itzler and Jamie Wheal) talk about how the 18-month program is designed to build elite, disciplined operatives with immense mental toughness and resilience.
- Wheal dives into the cutting-edge technology and science that the Navy uses to prepare these individuals. Itzler shares his experience meeting and briefly living with Goggins (who was also an Army Ranger) and the things he learned about pushing past perceived limits.
- Goggins dives into why you should leave your comfort zone, introduces the 40 percent rule, and explains why the biggest battle we all face is the one in our own minds. "Usually whatever's in front of you isn't as big as you make it out to be," says the SEAL turned motivational speaker. "We start to make these very small things enormous because we allow our minds to take control and go away from us. We have to regain control of our mind."
Is focusing solely on body mass index the best way for doctors to frame obesity?
- New guidelines published in the Canadian Medical Association Journal argue that obesity should be defined as a condition that involves high body mass index along with a corresponding physical or mental health condition.
- The guidelines note that classifying obesity by body mass index alone may lead to fat shaming or non-optimal treatments.
- The guidelines offer five steps for reframing the way doctors treat obesity.
A new 5-step system for treating obesity<p>To help primary care practitioners better treat obesity, the doctors outlined five steps:</p><ol><li>Recognition of obesity as a chronic disease by health care providers, who should ask the patient's permission to offer advice and help treat this disease in an unbiased manner.</li><li>Assessment of an individual living with obesity, using appropriate measurements, and identifying the root causes, complications and barriers to obesity treatment.</li><li>Discussion of the core treatment options (medical nutrition therapy and physical activity) and adjunctive therapies that may be required, including psychological, pharmacologic and surgical interventions.</li><li>Agreement with the person living with obesity regarding goals of therapy, focusing mainly on the value that the person derives from health-based interventions.</li><li>Engagement by health care providers with the person with obesity in continued follow-up and reassessments, and encouragement of advocacy to improve care for this chronic disease.</li></ol><p>Insider noted that some health professionals and body-positive advocates don't think the guidelines go far enough in reframing obesity treatment. The update still points "to individual bodies as the problem, not culture," registered dietitian <a href="https://www.bodykindnessbook.com/" target="_blank">Rebecca Scritchfield</a> told <a href="https://www.insider.com/canada-doctors-obesity-should-be-defined-by-health-not-weight-2020-8" target="_blank">Insider</a>.</p><p>But it's also possible to see how some health professionals may worry this new model could discourage patients from taking the initiative to tackle weight loss on their own, through exercise and dieting.</p><p>In a 2020 opinion piece published in <a href="https://www.frontiersin.org/articles/10.3389/fnut.2020.00002/full" target="_blank">Frontiers in Nutrition</a>, Dr. <a href="https://www.frontiersin.org/people/u/69229" target="_blank">Elliot M. 
Berry</a> argued that misplaced "medical and political correctness" may lead to the abrogation of the physician's responsibility to properly care for patients.</p><p style="margin-left: 20px;">"For example, some doctors are now even reluctant to raise the issue of obesity lest they be accused of fat shaming by not accepting their patients' proportions (despite the quote at the head of this opinion piece), and thereby receive poor approval ratings in an atmosphere where popularity is equated with good healthcare."</p><p>Berry offers a list of nine steps that he thinks could help the healthcare industry better treat obesity, without shaming patients or falling prey to political correctness.</p>
Here's why you might eat greenhouse gases in the future.
- The company's protein powder, "Solein," is similar in form and taste to wheat flour.
- Based on a concept developed by NASA, the product has wide potential as a carbon-neutral source of protein.
- The man-made "meat" industry just got even more interesting.
Seriously sustainable<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8xOTk0MDIzNS9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTYyMjM4NTMzMX0.BCEfYnn6C3z1zUHIS38xOWjXktgamNBi5iyqklSMYK8/img.png?width=980" id="ea524" class="rm-shortcode" data-rm-shortcode-id="50533380eeb18eb5833b6b6aa3abec38" data-rm-shortcode-name="rebelmouse-image" />
Image source: Solar Foods<p>Solar Foods makes Solein by extracting CO₂ from air using <a href="https://www.fastcompany.com/90356326/we-have-the-tech-to-suck-co2-from-the-air-but-can-it-suck-enough-to-make-a-difference" target="_blank">carbon-capture technology</a>, and then combines it with water, nutrients and vitamins, using 100 percent renewable solar energy from partner <a href="https://www.fortum.com" target="_blank">Fortum</a> to promote a natural fermentation process similar to the one that produces yeast and lactic acid bacteria.</p><p>When the company claims its single-celled protein is "free from agricultural limitations," they're not kidding. Being produced indoors means Solar Foods is not dependent on arable land, water (i.e., rain), or favorable weather.</p><p>The company is already working with the European Space Agency to develop foods for off-planet production and consumption. (The idea for Solein actually began at NASA.) They also see potential in bringing protein production to areas whose climate or ground conditions make conventional agriculture impossible.</p><p>And let's not forget all those <a href="https://www.bk.com/menu-item/impossible-whopper" target="_blank">beef-free burgers</a> based on pea and soy proteins currently gaining popularity. The environmental challenge of scaling up the supply of those plants to meet their high demand may provide an opening for the completely renewable Solein — the company could provide companies that produce animal-free "meats," such as <a href="https://www.beyondmeat.com/products/" target="_blank">Beyond Meat</a> and <a href="https://impossiblefoods.com" target="_blank">Impossible Foods</a>, a way to further reduce their environmental impact.</p>
The larger promise<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8xOTk0MDI0MS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY1NjU4MTg2OX0.7dZZYT5WEV_EupBuLVFwHynarTiz8RYR9aJtC6Ts2C4/img.jpg?width=980" id="3415d" class="rm-shortcode" data-rm-shortcode-id="2e6eebe06d795f844752f9e9d30040d7" data-rm-shortcode-name="rebelmouse-image" />
Image source: Solar Foods<p>The impact of the beef — and for that matter, poultry, pork, and fish — industries on our planet is widely recognized as one of the main drivers behind climate change, pollution, habitat loss, and antibiotic-resistant illness. From the cutting down of rainforests for cattle-grazing land, to runoff from factory farming of livestock and plants, to the disruption of the marine food chain, to the overuse of antibiotics in food animals, it's been disastrous.</p><p>The advent of a promising source of protein derived from two of the most renewable things we have, CO₂ and sunlight, <a href="https://solarfoods.fi/environmental-impact/" target="_blank">gets us out of the planet-destruction business</a> at the same time as it offers the promise of a stable, long-term solution to one of the world's most fundamental nutritional needs.</p>
Solar Foods' timetable<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8xOTk0MTEzMS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTU5OTU1OTMwMn0.wnXh56iO_77x2XKV2uIPf78BKw4AJLUpmiyq_JBVGvo/img.jpg?width=1245&coordinates=172%2C146%2C62%2C135&height=700" id="0297c" class="rm-shortcode" data-rm-shortcode-id="125c9a98ec818f5c241fa28ef1423e67" data-rm-shortcode-name="rebelmouse-image" />
Image source: Lubsan / Shutterstock / Big Think<p>While company plans are always moderated by unforeseen events — including the availability of sufficient funding — Solar Foods plans a global commercial rollout for Solein in 2021 and to be producing two million meals annually, with a revenue of $800 million to $1.2 billion by 2023. By 2050, they hope to be providing sustenance to 9 billion people as part of a $500 billion protein market.</p><p>The project began in 2018, and this year, they anticipate achieving three things: Launching Solein (check), beginning the approval process certifying its safety as a Novel Food in the EU, and publishing plans for a 1,000-metric ton-per-year factory capable of producing 500 million meals annually.</p>
The protein powder Solein. Image source: SOLAR FOODS