Why creating an AI that has free will would be a huge mistake
Giving human rights to a being with unlimited knowledge? Probably not a good idea.
Joanna Bryson is a Reader (tenured Associate Professor) at the University of Bath, and an affiliate of Princeton's Center for Information Technology Policy (CITP). She has broad academic interests in the structure and utility of intelligence, both natural and artificial. Venues for her research range from Reddit to Science. She is best known for her work in systems AI and AI ethics, both of which she began during her Ph.D. in the 1990s, but she and her colleagues publish broadly, in biology, anthropology, sociology, philosophy, cognitive science, and politics. Current projects include “The Limits of Transparency for Humanoid Robotics” funded by AXA Research, and “Public Goods and Artificial Intelligence” (with Alin Coman of Princeton University’s Department of Psychology and Mark Riedl of Georgia Tech) funded by Princeton’s University Center for Human Values. Other current research includes understanding the causality behind the correlation between wealth inequality and political polarization, generating transparency for AI systems, and research on machine prejudice deriving from human semantics. She holds degrees in Psychology from Chicago and Edinburgh, and in Artificial Intelligence from Edinburgh and MIT. At Bath, she founded the Intelligent Systems research group (one of four in the Department of Computer Science) and heads their Artificial Models of Natural Intelligence.
Joanna Bryson: First of all, there's the whole question of why we assume in the first place that we have obligations towards robots.
So we think that if something is intelligent, then that's the special source, that's why we have moral obligations toward it. And why do we think that?
Because for most of our moral obligations, the most important thing to us is each other.
So basically morality and ethics are the way that we maintain human society, including by doing things like keeping the environment okay, you know, making it so we can live.
So, one of the ways we characterize ourselves is as intelligent, and so when we then see something else and say, “Oh it’s more intelligent, well then maybe it needs even more protection.”
In AI we call that kind of reasoning heuristic reasoning: it’s a good guess that will probably get you pretty far, but it isn’t necessarily true.
I mean, again, how you define the term “intelligent” will vary. If you mean by “intelligent” a moral agent, you know, something that’s responsible for its actions, well then, of course, intelligence implies moral agency.
When will we know for sure that we need to worry about robots? Well, there are a lot of questions there, but consciousness is another one of those words. The word I like to use is "moral patient". It's a technical term that the philosophers came up with, and it means, exactly, something that we are obliged to take care of.
So now we can have this conversation.
If you just mean “conscious means moral patient”, then it’s no great assumption to say “well then, if it’s conscious then we need to take care of it”. But it’s way more cool if you can say, “Does consciousness necessitate moral patiency?” And then we can sit down and say, “well, it depends what you mean by consciousness.” People use consciousness to mean a lot of different things.
So one of the things that we did last year, which was pretty cool, made headlines, because we were replicating some psychology work on implicit bias. Actually the best headline is something like "Scientists Show That A.I. Is Sexist and Racist, and It's Our Fault," which is pretty accurate, because it really is about picking things up from our society.
Anyway, the point was, so here is an AI system that is so human-like that it’s picked up our prejudices and whatever… and it’s just vectors! It’s not an ape. It’s not going to take over the world. It’s not going to do anything, it’s just a representation; it’s like a photograph.
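The "it's just vectors" point can be made concrete. The research Bryson refers to measured bias in word embeddings by comparing vector similarities. Below is a minimal sketch of that idea, using made-up three-dimensional toy vectors (real embeddings such as word2vec have hundreds of dimensions and are trained on billions of words); the specific words and numbers here are illustrative assumptions, not the published data.

```python
import math

# Toy "word embeddings": invented numbers, purely for illustration.
vectors = {
    "doctor": [0.9, 0.1, 0.3],
    "nurse":  [0.3, 0.8, 0.3],
    "he":     [0.8, 0.2, 0.1],
    "she":    [0.2, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def association(word, attr_a, attr_b):
    """Positive if `word` sits closer to attr_a than attr_b in the space,
    i.e. the embedding has absorbed that association from its training text."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

print(association("doctor", "he", "she"))  # positive with these toy numbers
print(association("nurse", "he", "she"))   # negative with these toy numbers
```

The numbers the model "knows" are nothing but geometry over co-occurrence statistics: a mirror of the text it was trained on, which is exactly why it reflects our prejudices back at us without having any goals of its own.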
We can’t trust our intuitions about these things.
We give things rights because that’s the best way we can find to handle very complicated situations. And the things that we give rights are basically people.
I mean, some people argue about animals, but technically (and again this depends on whose technical definition you use) rights are usually things that come with responsibilities and that you can defend in a court of law.
So normally we talk about animal welfare and we talk about human rights, but with artificial intelligence you can even imagine it knowing its own rights and defending itself in a court of law. But the question is, why would we need to protect the artificial intelligence with rights? Why is that the best way to protect it?
So with humans it's because we're fragile, it's because there's only one of each of us. And I actually think, this is horribly reductionist, but I actually think it's just the best way that we've found to be able to cooperate. It's sort of an acknowledgment of the fact that we're all basically the same thing, the same stuff, and we had to come up with, the technical term again is equilibrium, some way to share the planet. We haven't managed to do it completely fairly (like "everybody gets the same amount of space"), but actually we all want to be recognized for our achievements, so even completely fair isn't completely fair, if that makes sense.
And I don’t mean to be facetious there, it really is true that you can’t make all the things you would like out of fairness be true at once.
That’s a fact about the world; it’s a fact about the way we define fairness.
So, given how hard it is to be fair, why should we build AI that needs us to be fair to it?
So what I’m trying to do is just make the problem simpler and focus us on the thing that we can’t help, which is the human condition.
And I'm recommending that if you specify something, if you say, okay, this is when something would really need rights in this context, then once we've established that, don't build that, okay?
This rubs a lot of people the wrong way, I think because they've watched Blade Runner or the movie A.I. or something like that.
In a lot of these movies we’re not really talking about AI, we’re not talking about something designed from the ground up, we’re talking basically about clones.
And clones are a different situation. If you have something that’s exactly like a person, however it was made, then okay, then it’s exactly like a person and it needs that kind of protection.
But even biological clones, even if you just want to clone yourself, at least in the European Union, that’s illegal. I’m not sure about in America. I think it’s illegal in America too.
But people think it’s unethical to create human clones partly because they don’t want to burden someone with the knowledge that they’re supposed to be someone else, that there was some other person that chose them to be that person. I don’t know if we’ll be able to stick to that, but I would say that AI clones fall into the same category.
If you’re really going to make something and then say, “Hey, congratulations, you’re me and you have to do what I say,” I wouldn’t want myself to tell me what to do, if that makes sense, if there were two of me!
I think we’d like to be able to both be equals, and so you don’t want to have—an artifact is something you’ve deliberately built and that you’re going to own.
If you have something that’s sort of a humanoid servant that you own, then the word for that is slave.
And so I was trying to establish that look, we are going to own anything we build, and so therefore it would be wrong to make it a person, because we’ve already established that slavery of people is wrong and bad and illegal.
And so it never occurred to me that people would take that to mean that "the robots will be people that we just treat really badly."
It’s like no, that’s exactly the opposite!
So, I already mentioned that somebody might manage to clone a person somehow, which I don't believe is ever going to work, but people do talk about it, and people are spending tens of millions of dollars on it: "whole brain uploading". So I don't believe it's possible; I don't think it's actually computationally tractable. But if that were to happen, then I would be there saying, "Yes, this is a person." And we can stop that the same way we stop human cloning, which is just to say, "Don't do that."
And particularly with AI my point is that it shouldn’t be a commercial product.
So if somebody does this in their basement or something, well, then we have a few exceptions, but I'm much more concerned about people mass-producing such things.
Joanna Bryson thinks that people are confusing artificial intelligence with human clones, mostly due to Hollywood movies like Blade Runner and Steven Spielberg's A.I., both of which feature very humanoid beings. Take away the somewhat cuddly ideas the movies have given us about artificial intelligence and you have this: hyper-smart machines with absolutely no limit to their knowledge. She posits that giving artificial intelligence the same rights a human has could result in pretty dire consequences... because AI has already proven that it can pick up negative human characteristics if those characteristics are in the data. Therefore, it's not crazy at all to think that AI could scan all of Twitter in one afternoon and pick up all the negativity we've unloaded there. If it's already proven it's not only capable of making the wrong decision but eventually will make the wrong decision when it comes to data mining and implementation, why even give it the same powers as us in the first place?