Does conscious AI deserve rights?

If machines develop consciousness, or if we manage to give it to them, the human-robot dynamic will forever be different.

RICHARD DAWKINS: When we come to artificial intelligence and the possibility of their becoming conscious, we reach a profound philosophical difficulty. I am a philosophical naturalist; I'm committed to the view that there is nothing in our brains that violates the laws of physics, there's nothing that could not, in principle, be reproduced in technology. It hasn't been done yet; we're probably quite a long way away from it, but I see no reason why in the future we shouldn't reach the point where a human-made robot is capable of consciousness and of feeling pain.

BABY X: Da. Da.

MARK SAGAR: Yes, that's right. Very good.

BABY X: Da. Da.

MARK SAGAR: Yeah.

BABY X: Da. Da.

MARK SAGAR: That's right.

JOANNA BRYSON: So, one of the things that we did last year, which was pretty cool, made headlines because we were replicating some psychology work about implicit bias. Actually, the best headline was something like 'Scientists show that AI is sexist and racist and it's our fault,' which is pretty accurate, because it really is about picking things up from our society. Anyway, the point was: here is an AI system that is so humanlike that it's picked up our prejudices, and yet it's just vectors. It's not an ape, it's not going to take over the world, it's not going to do anything; it's just a representation, like a photograph. We can't trust our intuitions about these things.
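
Bryson's "it's just vectors" point refers to word-embedding models, where this kind of bias shows up as nothing more than geometry between vectors. Below is a minimal sketch of that idea, not the actual method or data from her study; the tiny vectors and the word choices are invented for illustration, and real analyses use embeddings trained on large text corpora.

```python
import numpy as np

# Toy word vectors, invented purely for illustration; real studies use
# embeddings such as word2vec or GloVe trained on billions of words.
embeddings = {
    "he":         np.array([0.9, 0.1, 0.0]),
    "she":        np.array([0.1, 0.9, 0.0]),
    "programmer": np.array([0.8, 0.2, 0.1]),
    "nurse":      np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_association(word):
    # Positive values lean toward "he", negative values toward "she".
    return cosine(embeddings[word], embeddings["he"]) - cosine(embeddings[word], embeddings["she"])

for word in ("programmer", "nurse"):
    print(word, round(gender_association(word), 3))
```

With real embeddings, scores like these are what let researchers read social biases straight out of the vectors, without the model "understanding" anything.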

SUSAN SCHNEIDER: So why should we care about whether artificial intelligence is conscious? Well, given the rapid-fire developments in artificial intelligence, it wouldn't be surprising if within the next 30 to 80 years we start developing very sophisticated general intelligences. They may not be precisely like humans, they may not be as smart as us, but they may be sentient beings. If they might be conscious beings, we need ways of determining whether that's the case. It would be awful if, for example, we sent them to fight our wars, forced them to clean our houses, made them essentially a slave class. We don't want to make that mistake; we want to be sensitive to those issues, so we have to develop ways to determine whether artificial intelligence is conscious or not.

ALEX GARLAND: The Turing Test was a test set by Alan Turing, the father of modern computing. He understood that at some point the machines they were working on could become thinking machines as opposed to just calculating machines and he devised a very simple test.

DOMHNALL GLEESON (IN CHARACTER): It's when a human interacts with a computer and if the human doesn't know they're interacting with a computer the test is passed.

DOMHNALL GLEESON: And this Turing Test is a real thing and it's never, ever been passed.

ALEX GARLAND: What the film does is engage with the idea that it will, at some point, happen. The question is what that leads to.

MARK SAGAR: So she can see me and hear me. Hey, sweetheart, smile at Dad. Now, she's not copying my smile, she's responding to my smile. We've got different sorts of neuromodulators, which you can see up here. So, for example, I'm going to abandon the baby, I'm just going to go away and she's going to start wondering where I've gone. And if you watch up where the mouse is you should start seeing cortisol levels and other sorts of neuromodulators rising. She's going to get increasingly—this is a mammalian maternal separation distress response. It's okay sweetheart. It's okay. Aw. It's okay. Hey. It's okay.
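
Below is a toy sketch of the kind of neuromodulator dynamic Sagar narrates, where a simulated "cortisol" level relaxes toward baseline while the caregiver is present and climbs while the caregiver is away. The first-order update rule, the parameter values, and the variable names are invented for illustration; this is not BabyX's actual model.

```python
def step_cortisol(level, caregiver_present, baseline=0.1, ceiling=1.0, rate=0.15):
    # Move the current level a fraction of the way toward its target:
    # baseline when the caregiver is present, ceiling when absent.
    target = baseline if caregiver_present else ceiling
    return level + rate * (target - level)

level = 0.1
timeline = [True] * 5 + [False] * 10 + [True] * 5  # present, then absent, then back
for t, present in enumerate(timeline):
    level = step_cortisol(level, present)
    status = "present" if present else "absent"
    print(f"t={t:2d}  caregiver {status:7s}  cortisol={level:.2f}")
```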

RICHARD DAWKINS: This is profoundly disturbing because it goes against the grain to think that a machine made of metal and silicon chips could feel pain, but I don't see why they would not. And so, this moral consideration of how to treat artificially intelligent robots will arise in the future and it's a problem which philosophers and moral philosophers are already talking about.

SUSAN SCHNEIDER: So, suppose we figure out ways to devise consciousness in machines, it may be the case that we want to deliberately make sure that certain machines are not conscious. So, for example, consider a machine that we would send to dismantle a nuclear reactor so we would essentially quite possibly be sending it to its death, or a machine that we'd send to a war zone. Would we really want to send conscious machines in those circumstances? Would it be ethical? You might say well, maybe we can tweak their minds so they enjoy what they're doing or they don't mind sacrifice, but that gets into some really deep-seated engineering issues that are actually ethical in nature that go back to Brave New World, for example, situations where humans were genetically engineered and took a drug called Soma so that they would want to live the lives that they were given. So, we have to really think about the right approach. So, it may be the case that we deliberately devise machines for certain tasks that are not conscious.

MAX TEGMARK: Some people might prefer that their future home helper robot is an unconscious zombie, so they don't have to feel guilty about giving it boring chores or powering it down. Some people might prefer that it's conscious, so that there can be a positive experience in there and so they don't feel creeped out by this machine just faking it and pretending to be conscious even though it's a zombie.

JOANNA BRYSON: When will we know for sure that we need to worry about robots? Well, there's a lot of questions there, but consciousness is another one of those words. The word I like to use is moral patient; it's a technical term that the philosophers came up with and it means exactly something that we are obliged to take care of. So, now we can have this conversation: If you just mean conscious means moral patient then it's no great assumption to say well then if it's conscious then we need to take care of it. But it's way more cool if you can say: Does consciousness necessitate moral patiency? And then we can sit down and say, well, it depends what you mean by consciousness. People use consciousness to mean a lot of different things.

For a lot of people this rubs them the wrong way because they've watched Blade Runner or the movie AI or something like that. In a lot of these movies we're not really talking about AI, we're not talking about something designed from the ground up; we're talking basically about clones. And clones are a different situation. If you have something that's exactly like a person, however it was made, then okay, it's exactly like a person and it needs that kind of protection. But people think it's unethical to create human clones partly because they don't want to burden someone with the knowledge that they're supposed to be someone else, right, that there was some other person that chose them to be that person. I don't know if we'll be able to stick to that, but I would say that AI clones fall into the same category. If you were really going to make something and then say, hey, congratulations, you're me and you have to do what I say, well, I wouldn't want myself to tell me what to do, if that makes sense, if there were two of me. Right? I think we'd both like to be equals, and so you don't want to have an artifact of something that you've deliberately built and that you're going to own. If you have something that's sort of a humanoid servant that you own, then the word for that is slave. And so, I was trying to establish that, look, we are going to own anything we build, and so therefore it would be wrong to make it a person, because we've already established that slavery of people is wrong and bad and illegal. And so, it never occurred to me that people would take that to mean, oh, the robots will be people that we just treat really badly. It's like, no, that's exactly the opposite.

We give things rights because that's the best way we can find to handle very complicated situations. And the things that we give rights to are basically people. I mean, some people argue about animals, but technically, and again this depends on whose technical definition you use, rights are usually things that come with responsibilities and that you can defend in a court of law. So, normally we talk about animal welfare and we talk about human rights. But with artificial intelligence you can even imagine it knowing its rights and defending itself in a court of law. The question, though, is why would we need to protect the artificial intelligence with rights? Why is that the best way to protect it? With humans it's because we're fragile, it's because there's only one of each of us, and, I actually think this is horribly reductionist, but I actually think it's just the best way that we've found to be able to cooperate. It's sort of an acknowledgment of the fact that we're all basically the same thing, the same stuff, and we had to come up with some kind of, the technical term, again, is equilibrium; we had to come up with some way to share the planet, and we haven't managed to do it completely fairly, like everybody gets the same amount of space, but actually we all want to be recognized for our achievements, so even completely fair isn't completely fair, if that makes sense. And I don't mean to be facetious there; it really is true that you can't make all the things you would like out of fairness be true at once. That's just a fact about the world, it's a fact about the way we define fairness. So, given how hard it is to be fair, why should we build AI that needs us to be fair to it? What I'm trying to do is just make the problem simpler and focus us on the thing that we can't help, which is the human condition. And I'm recommending that if you specify something, if you say, okay, this is when you really need rights in this context, okay, once we've established that, don't build that.

PETER SINGER: Exactly where we would place robots would depend on what capacities we believe they have. I can imagine that we might create robots that are limited to the intelligence level of nonhuman animals, perhaps not the smartest nonhuman animals either; they could still perform routine tasks for us, they could fetch things for us on voice command. That's not very hard to imagine. But I don't think that would necessarily be a sentient being. And so, if it was just a robot whose workings we understood exactly, which is not very far from what we have now, I don't think it would be entitled to any rights or moral status. But if it was at a higher level than that, if we were convinced that it was a conscious being, then the kind of moral status it would have would depend on exactly what level of consciousness and what level of awareness. Is it more like a pig, for example? Well, then it should have the same rights as a pig—which, by the way, I think we are violating every day on a massive scale by the way we treat pigs in factory farms. So, I'm not saying such a robot should be treated like pigs are being treated in our society today. On the contrary, it should be treated with respect for its desires and awareness and its capacity to feel pain and its social nature; all of those things that we ought to take into account when we are responsible for the lives of pigs we would also have to take into account when we are responsible for the lives of robots at a similar level. But if we created robots who were at our level, then I think we would have to give them really the same rights that we have. There would be no justification for saying, oh yes, but we're a biological creature and you're a robot; I don't think that has anything to do with the moral status of a being.

GLENN COHEN: One possibility is you say: A necessary condition for being a person is being a human being. So, many people are attracted to that argument and say: Only humans can be persons. All persons are humans. Now, it may be that not all humans are persons, but all persons are humans. Well, there's a problem with that, and it's put most forcefully by the philosopher and bioethicist Peter Singer, who says that to reject the possibility that a species has rights and ought to be a moral patient, the kind of thing owed moral consideration, on the basis of the mere fact that its members are not members of your species, is morally equivalent to refusing rights or moral consideration to someone on the basis of their race. So, he says speciesism equals racism. And the argument is: Imagine that you encountered someone who was just like you in every possible respect, but it turned out they actually were not a member of the human species; they were a Martian, let's say, or they were a robot, and truly exactly like you. Why would you be justified in giving them less moral regard?

So, people who believe in capacity-X views have to at least be open to the possibility that artificial intelligence could have the relevant capacities, even though it's not human, and therefore qualify for personhood. On the other side of the continuum, one of the implications is that you might have members of the human species that aren't persons, and so anencephalic children, children born with very little above the brain stem in terms of their brain structure, are often given as an example. They're clearly members of the human species, but their abilities to exercise the kinds of capacities most people think matter are relatively few and far between. So, you get into this uncomfortable position where you might be forced to recognize that some humans are non-persons and some non-humans are persons.

Now again, if you bite the bullet and say 'I'm willing to be a speciesist; being a member of the human species is either necessary or sufficient for being a person,' you avoid this problem entirely. But if not, you at least have to be open to the possibility that artificial intelligence in particular may at one point become person-like and have the rights of persons. And I think that scares a lot of people, but in reality, to me, when you look at the course of human history and see how willy-nilly we were in declaring some people non-persons under the law, slaves in this country, for example, it seems to me a little humility and a little openness to this idea may not be the worst thing in the world.

  • Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.
  • Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question ahead of technological development to avoid humanity creating a slave class of conscious beings.
  • One potential safeguard against that? Regulation. Once we define the context in which AI requires rights, the simplest solution may be to not build that thing.

A landslide is imminent and so is its tsunami

An open letter predicts that a massive wall of rock is about to plunge into Barry Arm Fjord in Alaska.

Image source: Christian Zimmerman/USGS/Big Think
  • A remote area visited by tourists and cruises, and home to fishing villages, is about to be visited by a devastating tsunami.
  • A wall of rock exposed by a receding glacier is about to crash into the waters below.
  • Glaciers hold such areas together — and when they're gone, bad stuff can be left behind.

The Barry Glacier gives its name to Alaska's Barry Arm Fjord, and a new open letter forecasts trouble ahead.

Thanks to global warming, the glacier has been retreating, so far removing two-thirds of its support for a steep mile-long slope, or scarp, containing perhaps 500 million cubic meters of material. (Think the Hoover Dam times several hundred.) The slope has been moving slowly since 1957, but scientists say it's become an avalanche waiting to happen, maybe within the next year, and likely within 20. When it does come crashing down into the fjord, it could set in motion a frightening tsunami overwhelming the fjord's normally peaceful waters.
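
As a quick back-of-the-envelope check on that parenthetical comparison: the slope volume comes from the paragraph above, while the Hoover Dam figure of roughly 2.5 million cubic meters of concrete is an assumed round number taken from commonly cited estimates, not from the open letter.

```python
slope_volume_m3 = 500e6  # estimated volume of unstable material, per the letter
hoover_dam_m3 = 2.5e6    # assumed approximate concrete volume of Hoover Dam
print(f"Roughly {slope_volume_m3 / hoover_dam_m3:.0f} Hoover Dams' worth of rock")
```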

"It could happen anytime, but the risk just goes way up as this glacier recedes," says hydrologist Anna Liljedahl of Woods Hole, one of the signatories to the letter.

The Barry Arm Fjord

Camping on the fjord's Black Sand Beach

Image source: Matt Zimmerman

The Barry Arm Fjord is a stretch of water between the Harriman Fjord and Port Wells, located at the northwest corner of the well-known Prince William Sound. It's a beautiful area, home to a few hundred people supporting the local fishing industry, and it's also a popular destination for tourists — its Black Sand Beach is one of Alaska's most scenic — and cruise ships.

Not Alaska’s first watery rodeo, but likely the biggest

Image source: whrc.org

There have been at least two similar events in the state's recent history, though not on such a massive scale. On July 9, 1958, a nearby earthquake caused 40 million cubic yards of rock to suddenly slide 2,000 feet down into Lituya Bay, producing a tsunami whose peak waves reportedly reached 1,720 feet in height. By the time the wall of water reached the mouth of the bay, it was still 75 feet high. At Taan Fjord in 2015, a landslide caused a tsunami that crested at 600 feet. Both of these events thankfully occurred in sparsely populated areas, so there were few fatalities.

The Barry Arm event will be larger than either of these by far.

"This is an enormous slope — the mass that could fail weighs over a billion tonnes," said geologist Dave Petley, speaking to Earther. "The internal structure of that rock mass, which will determine whether it collapses, is very complex. At the moment we don't know enough about it to be able to forecast its future behavior."

Outside of Alaska, on the west coast of Greenland, a landslide-produced tsunami towered 300 feet high, obliterating a fishing village in its path.

What the letter predicts for Barry Arm Fjord

Moving slowly at first...

Image source: whrc.org

"The effects would be especially severe near where the landslide enters the water at the head of Barry Arm. Additionally, areas of shallow water, or low-lying land near the shore, would be in danger even further from the source. A minor failure may not produce significant impacts beyond the inner parts of the fiord, while a complete failure could be destructive throughout Barry Arm, Harriman Fiord, and parts of Port Wells. Our initial results show complex impacts further from the landslide than Barry Arm, with over 30 foot waves in some distant bays, including Whittier."

The discovery of the impending landslide began with an observation by the sister of geologist Hig Higman of Ground Truth, an organization in Seldovia, Alaska. Artist Valisa Higman was vacationing in the area and sent her brother some photos of worrying fractures she had noticed in the slope, taken while she was on a boat cruising the fjord.

Higman confirmed his sister's hunch via available satellite imagery and, digging deeper, found that between 2009 and 2015 the slope had moved 600 feet downhill, leaving a prominent scar.

Ohio State's Chunli Dai unearthed a connection between the movement and the receding of the Barry Glacier. Comparison of the Barry Arm slope with other similar areas, combined with computer modeling of the possible resulting tsunamis, led to the publication of the group's letter.

While the full group of signatories from 14 organizations and institutions has only been working on the situation for a month, the implications were immediately clear. The signers include experts from Ohio State University, the University of Southern California, and the Anchorage and Fairbanks campuses of the University of Alaska.

Once informed of the open letter's contents, Alaska's Department of Natural Resources immediately released a warning that "an increasingly likely landslide could generate a wave with devastating effects on fishermen and recreationalists."

How do you prepare for something like this?

Image source: whrc.org

The obvious question is: What can be done to prepare for the landslide and tsunami? For one thing, there's more to understand about the upcoming event, and the researchers lay out their plan in the letter:

"To inform and refine hazard mitigation efforts, we would like to pursue several lines of investigation: Detect changes in the slope that might forewarn of a landslide, better understand what could trigger a landslide, and refine tsunami model projections. By mapping the landslide and nearby terrain, both above and below sea level, we can more accurately determine the basic physical dimensions of the landslide. This can be paired with GPS and seismic measurements made over time to see how the slope responds to changes in the glacier and to events like rainstorms and earthquakes. Field and satellite data can support near-real time hazard monitoring, while computer models of landslide and tsunami scenarios can help identify specific places that are most at risk."

In the letter, the authors reached out to those living in and visiting the area, asking, "What specific questions are most important to you?" and "What could be done to reduce the danger to people who want to visit or work in Barry Arm?" They also invited locals to let them know about any changes, including even small rock-falls and landslides.
