Creation without consequence: How Silicon Valley made a hot mess of progress
At the dawn of the AI era, where decisions made now could affect the future of mankind, regulation over tech giants is needed now more than ever.
Joanna Bryson is a Reader (tenured Associate Professor) at the University of Bath, and an affiliate of Princeton's Center for Information Technology Policy (CITP). She has broad academic interests in the structure and utility of intelligence, both natural and artificial. Venues for her research range from Reddit to Science. She is best known for her work in systems AI and AI ethics, both of which she began during her Ph.D. in the 1990s, but she and her colleagues publish broadly, in biology, anthropology, sociology, philosophy, cognitive science, and politics. Current projects include “The Limits of Transparency for Humanoid Robotics” funded by AXA Research, and “Public Goods and Artificial Intelligence” (with Alin Coman of Princeton University’s Department of Psychology and Mark Riedl of Georgia Tech) funded by Princeton’s University Center for Human Values. Other current research includes understanding the causality behind the correlation between wealth inequality and political polarization, generating transparency for AI systems, and research on machine prejudice deriving from human semantics. She holds degrees in Psychology from Chicago and Edinburgh, and in Artificial Intelligence from Edinburgh and MIT. At Bath, she founded the Intelligent Systems research group (one of four in the Department of Computer Science) and heads their Artificial Models of Natural Intelligence.
Joanna Bryson: If we're coding AI and we understand that there are moral consequences, does that mean the programmer has to understand them? It isn't only the programmer, although we do think we need to train programmers to watch out for these kinds of situations, to know how and when to blow the whistle. There is a problem of people being over-reactive, and that costs companies, and I understand that, but we also have sort of a Nuremberg situation: we need everybody to be responsible. Ultimately, though, it isn't just about the programmers. The programmers work within the context of a company, and the companies work in the context of regulation, so it is about the law; it's about society. One of the papers that came out in 2017, by Professor Alan Winfield, was about how legislatures can't be expected to keep up with the pace of technological change, but what they can keep up with is which professional societies they trust. They already do this in various disciplines; it's just new for AI. You say you have to meet the moral standards of at least one professional organization, and those organizations set their rules about what's okay. That allows you a kind of loose coupling, because it's wrong for professional organizations to enforce the law, to go after people, to sue them, whatever. That's not what professional organizations are for. But it is sensible, because what professional organizations are for is keeping up with their own field and setting things like codes of conduct. So that's why you want to bring those two things together, the executive government and the professional organizations, and you can have the legislature join the two.
This is what I'm working hard to keep in the regulations: that it's always people in organizations who are accountable, so that they will be motivated to demonstrate they followed due process, both the people who are operating the AI and the people who developed it. It's like a car. When there's a car accident, normally the driver is at fault; sometimes the person they hit is at fault because they did something completely unpredictable. But sometimes the manufacturer did something wrong with the brakes, and that's a real problem. So we need to be able to show either that the manufacturer followed good practice and it really is the fault of the driver, or sometimes that there isn't a fact of the matter at all: the thing was unforeseeable in the past, but of course now it's happened, so in the future we'll be more careful.
That just happened recently in Europe. There was a case where somebody was driving... it wasn't a totally driverless car, but it had cruise control or something, some extra AI, and unfortunately the driver had a stroke. What happens a lot, and what automobile manufacturers have to watch for, is drivers falling asleep at the wheel, but this man had a stroke, which is different from falling asleep. So he was still holding on, semi in control, but couldn't see anything, and he hit a family, killing two of its three members. The survivor was the father, and he said he wasn't happy only to get money from insurance or liability; he wanted to know that whoever had caused the accident was being held accountable. So there was a big lawsuit, and the car manufacturer was able to show they had followed due process. They had been incredibly careful; it was just a really unlikely thing, to have that kind of stroke where you'd still be holding onto the steering wheel and all the rest. And so it was decided that nobody was actually at fault. But it could have been different. If Facebook is really moving fast and breaking things, then they're going to have a lot of trouble proving they did due diligence when Cambridge Analytica got the data that it got. And so they are very likely to be held to account for anything found to have been a negative consequence of that behavior. It's something computer scientists should want, that tech companies should want: to be able to show that they've done the right thing.
So everybody who's ever written any code at all knows there are two different pressures. One is that you want clean, beautiful code that you can understand, that's well documented and all the rest, and that's good, because if you ever need to change it or extend it, or you want to write more software, you can reuse it, other people can reuse it, maybe you'll get famous for your great code. The other is that you want to put stuff out as soon as you can, because you can sell it faster, you don't have time, you want to go do something else, or maybe you don't even understand what you're really doing and you've just barely got it to work. Whatever. Those two pressures are always working against each other, and it's really, really important for all of our benefit, for society's benefit, that we put weight on the side of the nice clean code, so we can answer questions like the one I just mentioned: who's at fault if data goes the wrong place? Right now that's not the way it's been going. It has been completely Wild West, and nobody can tell where the data is going. But with a few lawsuits, with a few big failures, I think everyone is going to be motivated to say: no, I want to show that the AI did the right thing and it was the owner's fault, or that we followed due diligence and this was an unforeseeable consequence. They're going to want to prove that stuff. And like I said, that's going to benefit not just the companies, and not just the owners or operators, but all of us, because we want liability to sit with the people who are making the decisions. That's the right way around, and that's why we want to maintain human accountability even when the autonomous system is sometimes taking decisions.
The thing that drives me crazy, the thing organizations do wrong about AI right now, is when they go in and try to fight regulation by saying you'll lose the magic juice: that deep learning is the only reason we've got AI, and if you regulate us you can't use it, because nobody knows what the weights are doing in deep learning. This is completely false. First of all, when you audit a company you don't go and try to figure out how the synapses are connected in the accountants' brains; you just look at the accounts. So in the worst case, we can do the same thing with AI that we already do with humans. Now again, this goes back to what I was saying earlier about due diligence. If you have accountants and the accounts are wrong, you can put the accountant on the stand, ask why the accounts are wrong, and try to establish whether they were doing the right thing at the right time. You can't do that with AI systems, but if you want to prove the mistakes were honest, you can look at how the system was trained and how it was being run. There are ways to audit whether the system was built appropriately. So I think we should be out looking for those, because that also allows us to improve our systems. The most important thing is just not believing the whole magic line. One of the companies I heard give the magic line in a regulatory setting, in front of government representatives, was Microsoft, and that was in early 2017. Now they've completely reversed that. They've sat down, they've thought about it, and now they say accountability and transparency are absolutely core to what we should be doing with AI. I think Microsoft is making really strong efforts to be the adults in the room right now, which is interesting. Like I said, literally within one year there was that change. So I think everybody should be thinking that AI is not this big exception. Don't look for a way to get out of responsibility.
Joanna Bryson isn't a fan of companies that can't hold themselves responsible for their actions. Too many tech companies, she argues, think they're above the law, that they should create whatever they want, no matter who it hurts, and let society pick up the pieces later. This libertarian attitude might be fine if the company happens to be a young startup. But if the company is a massive behemoth like Facebook, one that could easily manipulate two billion people worldwide, or perhaps influence an election, then there should be some oversight. Tech companies, she argues, could potentially create something catastrophic that they can't take back. And at the dawn of the AI era, when decisions made now could affect the future of mankind, regulation over these tech giants is needed more than ever.
Educators and administrators must build new supports for faculty and student success in a world where the classroom might become virtual in the blink of an eye.
- If you or someone you know is attending school remotely, you are more than likely learning through emergency remote instruction, which is not the same as online learning, write Rich DeMillo and Steve Harmon.
- Education institutions must properly define and understand the difference between a course that is designed from inception to be taught in an online format and a course that has been rapidly converted to be offered to remote students.
- In a future involving more online instruction than any of us ever imagined, it will be crucial to meticulously design factors like learner navigation, interactive recordings, feedback loops, exams and office hours in order to maximize learning potential within the virtual environment.
New study shows white dwarf stars create an essential component of life.
- White dwarf stars create carbon atoms in the Milky Way galaxy, a new study shows.
- Carbon is an essential component of life.
- White dwarfs make carbon in their hot interiors before the stars die.
What Are White Dwarf Stars? (video: https://www.youtube.com/embed/77a1KSxfaR0)
The renowned magician recently joined Big Think CEO and cofounder Victoria Brown for a wide-ranging discussion.
- Penn Jillette is an American magician best known for his work as part of the magic duo Penn and Teller.
- Jillette has also written eight books, co-hosted the Showtime show "Bullshit," and produced the film "Tim's Vermeer."
- In the interview, Jillette talks about how libertarianism has been distorted in the U.S., and why the democratization of media hasn't produced a utopia.
How being businesslike, not affectionate, can build strong friendships

Jillette has been collaborating with the magician and filmmaker Teller for 44 years on their magic act, currently stationed in Las Vegas. In all that time, Jillette says, their friendship has been more businesslike than affectionate.

"There's just some people you just want to be with and there's that cuddly feeling," Jillette said. "And there's other people who your relationship would be identical if it were over email, totally intellectual."

The pair's relationship is decidedly the latter.

"Teller and I have never had any affection for one another," Jillette said. "No desire to hug. We only shake hands when it's part of a script. We don't seek out each other's company, but there's no one that I respect more and I believe at a core level that I do better stuff with Teller than I do alone."

But that's not to say that relationships like these are entirely about business.

"It turns out respect is more enduring than love," he said. "Now, I have to add here that my daughter whenever I say this gets very, very bothered because she says that Teller is my BFF and there's no way around that and that's absolutely true. I'm saying that in a kind of skeletal way. The truth is that Teller's my best friend over all those years."

Jillette's description of this type of relationship sounds a bit like Aristotle's idea of the "friendship of the good."

The Greek philosopher outlined three types of friendship, each based on a different feeling or value: pleasure, utility, and "good." Aristotle thought the "friendship of the good" was the best kind of relationship, because it's built on respect and admiration for the virtues each friend sees in the other. Aristotle believed these friendships might not form quickly, but they tend to be longer lasting than the other types (https://www.sparknotes.com/philosophy/ethics/section8/).
Why refusing to wear a mask is not a libertarian idea

Libertarianism is "the belief that peace, prosperity and social harmony are fostered by as much liberty as possible and as little government as necessary," according to the Institute for Humane Studies at George Mason University (https://theihs.org/who-we-are/what-is-libertarian/). But when this impulse toward individual freedom becomes too rigid, it can pose problems for a society that needs to work together to navigate a nationwide problem, like a pandemic.

Since COVID-19 began spreading across the U.S., a portion of Americans have said it's un-American for the government to try to force (or, more accurately in most cases, ask) citizens to wear masks in public (https://www.washingtonpost.com/national/coronavirus-masks-america/2020/04/18/bdb16bf2-7a85-11ea-a130-df573469f094_story.html). Here, Jillette distinguishes between positive and negative freedoms, most commonly defined as freedom to and freedom from (https://www.open.edu/openlearn/ocw/mod/oucontent/view.php?printable=1&id=1747).

"Libertarianism has been so distorted," Jillette said. "I mean I don't know if I have to pull my name out of that ring. It's been adopted by people who don't seem to hold the responsibility side of it and don't seem to hold the compassion side of it."

"I can see arguments for not wearing seatbelts and I can see arguments for not wearing motorcycle helmets but I cannot see any argument for driving drunk. And that is what not wearing a mask is. It's not risking yourself. It's risking the people around you which I don't see a way that that's your right."
How removing media gatekeepers didn't lead to utopia

How did the democratization and decentralization of the media change the world? In the 1990s, Jillette might have said that removing media gatekeepers would produce a sort of open, meritocratic utopia: you have an interesting idea, you throw it online, and it spreads all over the world.

But that's not quite what happened.

"I thought getting rid of the gatekeepers could be nothing but good," Jillette said. "And now it seems like getting rid of the gatekeepers gave us Trump as president and in the same breath, in the same wind, gave us not wearing masks and maybe gave us a huge unpleasant amount of overt racism."

It also gave us cancel culture. But Jillette said he "can't even rant against cancel culture," because there's no obvious way to fix it without obstructing free speech rights. After all, it's a good thing that victimized people are now able to go online, post grievances, and (sometimes) see justice delivered, whereas in the past they had to file their complaints with a series of gatekeepers. But at the same time, this unmanaged system leaves it vulnerable to abuse.

"Now you could be obviously lying and still have a million-and-a-half people believe you and do real damage to the person that you said wrong to," Jillette said.
A leading British space scientist thinks there is life under the ice sheets of Europa.
- British scientist Professor Monica Grady recently voiced support for the idea that extraterrestrial life exists on Europa.
- Europa, the sixth largest moon in the solar system, may have favorable conditions for life under its miles of ice.
- Europa is one of Jupiter's 79 known moons.
Neil deGrasse Tyson wants to go ice fishing on Europa (video)
Water Vapor Above Europa's Surface Detected for First Time (video: https://www.youtube.com/embed/WQ-E1lnSOzc)
Master negotiator Chris Voss breaks down how to get what you want during negotiations.
- Former FBI negotiator Chris Voss explains how forced empathy is a powerful negotiating tactic.
- The key is starting a sentence with "What" or "How," causing the other person to look at the situation through your eyes.
- What appears to signal weakness is turned into a strength when using this tactic.
3 Tips on Negotiations, with FBI Negotiator Chris Voss | Best of '16 | Big Think (video: https://www.youtube.com/embed/-FLlBchonwM)

This kind of question forces a response, but, and this is key, the other person has to consider your side of the argument. They have to look at the situation from your perspective if they hope to offer a solution.

Offering a real-world example, Voss mentions coaching a high-end real estate agent who was leasing an expensive home in the Hollywood Hills. The first time the negotiators asked the "how" question, the leasing agent relented on a number of terms. A little while later, they asked again. This time, the agent said, "If you want the house you're going to have to do it," signaling that the end of negotiations had been reached.

Voss says that "how" is not the only word that works. "What" is also a powerful entry into negotiations, as in "What am I supposed to do?" Again, you're forcing the other person to empathize.

This is a particularly tricky skill at a time when most conversations happen online. Nuance is impossible without the immediacy of pantomime and vocal fluctuation. Whataboutism is too easy an escape.
Morihei Ueshiba (1883-1969, standing, centre left), founder of the Japanese martial art of aikido, demonstrating his art with a follower at the opening ceremony of the newly opened aikido headquarters, Hombu Dojo, in Shinjuku, Tokyo, 1967. (Photo by Keystone/Hulton Archive/Getty Images)

Online debates often amount to little more than frustrated individuals pulling out their hair. In his book "Against Empathy," Yale psychology professor Paul Bloom writes that effective altruists are able to focus on what really matters in everyday life.

For example, he compares politics to sports. Rooting for your favorite team isn't based in rationality. If you're a Red Sox fan, Yankees stats don't matter. You just want to destroy them. This, he believes, is how most people treat politics. "They don't care about truth because, for them, it's not really about truth."

Bloom writes that if his son believed our ancestors rode dinosaurs, it would horrify him, but "I can't think of a view that matters less for everyday life." We have to strive for rationality when the stakes are high. When involved in real decision-making processes that will affect their lives, people are better able to express ideas and make arguments, and are more receptive to opposing ideas.

Because we "become inured to problems that seem unrelenting," it's imperative to make the problem seem immediate. As Voss says, giving the other side "the illusion of control" is one way of accomplishing this, as it forces them to take action. When people feel out of control, negotiations are impossible. People dig in their heels and refuse to budge.

What seems to be weakness is actually a strength. To borrow another martial arts metaphor, negotiations are like aikido: using your opponent's force against them while also protecting them from injury. Forcing empathy is one way to accomplish this task. You may get more than you ask for without the other side ever realizing they surrendered anything.

--

Stay in touch with Derek on Twitter (http://www.twitter.com/derekberes), Facebook (https://www.facebook.com/DerekBeresdotcom) and Substack (https://derekberes.substack.com/). His next book is "Hero's Dose: The Case For Psychedelics in Ritual and Therapy."