Why creating an AI that has free will would be a huge mistake
Giving human rights to a being with unlimited knowledge? Probably not a good idea.
Joanna Bryson is a Reader (tenured Associate Professor) at the University of Bath, and an affiliate of Princeton's Center for Information Technology Policy (CITP). She has broad academic interests in the structure and utility of intelligence, both natural and artificial. Venues for her research range from Reddit to Science. She is best known for her work in systems AI and AI ethics, both of which she began during her Ph.D. in the 1990s, but she and her colleagues publish broadly, in biology, anthropology, sociology, philosophy, cognitive science, and politics. Current projects include “The Limits of Transparency for Humanoid Robotics” funded by AXA Research, and “Public Goods and Artificial Intelligence” (with Alin Coman of Princeton University’s Department of Psychology and Mark Riedl of Georgia Tech) funded by Princeton’s University Center for Human Values. Other current research includes understanding the causality behind the correlation between wealth inequality and political polarization, generating transparency for AI systems, and research on machine prejudice deriving from human semantics. She holds degrees in Psychology from Chicago and Edinburgh, and in Artificial Intelligence from Edinburgh and MIT. At Bath, she founded the Intelligent Systems research group (one of four in the Department of Computer Science) and heads their Artificial Models of Natural Intelligence.
Joanna Bryson: First of all, there’s the whole question of why it is that we assume in the first place that we have obligations towards robots.
So we think that if something is intelligent, then that intelligence is the special source of its moral standing, and that’s why we have moral obligations toward it. And why do we think that?
Because with most of our moral obligations, the most important thing to us is each other.
So basically morality and ethics are the way that we maintain human society, including by doing things like keeping the environment okay, you know, making it so we can live.
So, one of the ways we characterize ourselves is as intelligent, and so when we see something else that seems more intelligent, we say, “Oh, well then maybe it needs even more protection.”
In AI we call that kind of reasoning heuristic reasoning: it’s a good guess that will probably get you pretty far, but it isn’t necessarily true.
I mean, again, how you define the term “intelligent” will vary. If you mean by “intelligent” a moral agent, you know, something that’s responsible for its actions, well then, of course, intelligence implies moral agency.
When will we know for sure that we need to worry about robots? Well, there’s a lot of questions there, but consciousness is another one of those words. The word I like to use is “moral patient”. It’s a technical term that the philosophers came up with, and it means, exactly, something that we are obliged to take care of.
So now we can have this conversation.
If you just mean “conscious means moral patient”, then it’s no great assumption to say “well then, if it’s conscious then we need to take care of it”. But it’s way more cool if you can say, “Does consciousness necessitate moral patiency?” And then we can sit down and say, “well, it depends what you mean by consciousness.” People use consciousness to mean a lot of different things.
So one of the things that we did last year, which was pretty cool, made headlines, because we were replicating some psychology findings about implicit bias. Actually the best headline was something like “Scientists Show That A.I. Is Sexist and Racist, and It’s Our Fault,” and that’s pretty accurate, because it really is about picking things up from our society.
Anyway, the point was, so here is an AI system that is so human-like that it’s picked up our prejudices and whatever… and it’s just vectors! It’s not an ape. It’s not going to take over the world. It’s not going to do anything, it’s just a representation; it’s like a photograph.
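The claim that “it’s just vectors” can be made concrete. The sketch below shows, in miniature, how an association like the ones those headlines described can live in word embeddings: words become points in space, and bias is just geometry. The words, dimensions, and numbers here are invented stand-ins for illustration; real embeddings such as word2vec or GloVe have hundreds of dimensions learned from text.

```python
# A minimal sketch of how bias can live in "just vectors": word embeddings
# represent words as points in space, and associations fall out of geometry.
# The vectors below are tiny, hand-made toy values, not real embedding data.
import math

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-d "embeddings" in which career-flavored words lean one way
# and family-flavored words lean another.
vectors = {
    "he":     (0.9, 0.1, 0.0),
    "she":    (0.1, 0.9, 0.0),
    "career": (0.8, 0.2, 0.1),
    "family": (0.2, 0.8, 0.1),
}

# An association test in the spirit of implicit-bias measures:
# does "career" sit closer to "he" than to "she" in this space?
bias = cosine(vectors["career"], vectors["he"]) - cosine(vectors["career"], vectors["she"])
print(f"'career' leans toward 'he' by {bias:.3f}")
```

The point of the exercise is Bryson’s: the representation faithfully reflects whatever associations were in the data it was built from, with no agency involved, like a photograph.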
We can’t trust our intuitions about these things.
We give things rights because that’s the best way we can find to handle very complicated situations. And the things that we give rights are basically people.
I mean, some people argue about animals, but technically (and again, this depends on whose technical definition you use) rights are usually things that come with responsibilities and that you can defend in a court of law.
So normally we talk about animal welfare and we talk about human rights, but with artificial intelligence you can even imagine it knowing its own rights and defending itself in a court of law. But the question is, why would we need to protect the artificial intelligence with rights? Why is that the best way to protect it?
So with humans it’s because we’re fragile, it’s because there’s only one of us. And I actually think—this is horribly reductionist, but I actually think—it’s just the best way that we’ve found to be able to cooperate. It’s sort of an acknowledgment of the fact that we’re all basically the same thing, the same stuff, and we had to come up with, the technical term again is equilibrium, we had to come up with some way to share the planet, and we haven't managed to do it completely fairly (like ‘everybody gets the same amount of space’), but actually we all want to be recognized for our achievements so even completely fair isn’t completely fair, if that makes sense.
And I don’t mean to be facetious there, it really is true that you can’t make all the things you would like out of fairness be true at once.
That’s a fact about the world; it’s a fact about the way we define fairness.
So, given how hard it is to be fair, why should we build AI that needs us to be fair to it?
So what I’m trying to do is just make the problem simpler and focus us on the thing that we can’t help, which is the human condition.
And I’m recommending that once you’ve specified it, once you’ve said, “okay, this is when something really needs rights in this context,” then once we’ve established that, don’t build that thing, okay?
This rubs a lot of people the wrong way, often because they’ve watched Blade Runner or the movie A.I. or something like that.
In a lot of these movies we’re not really talking about AI, we’re not talking about something designed from the ground up, we’re talking basically about clones.
And clones are a different situation. If you have something that’s exactly like a person, however it was made, then okay, then it’s exactly like a person and it needs that kind of protection.
But even biological clones, even if you just want to clone yourself, at least in the European Union, that’s illegal. I’m not sure about in America. I think it’s illegal in America too.
But people think it’s unethical to create human clones partly because they don’t want to burden someone with the knowledge that they’re supposed to be someone else, that there was some other person that chose them to be that person. I don’t know if we’ll be able to stick to that, but I would say that AI clones fall into the same category.
If you’re really going to make something and then say, “Hey, congratulations, you’re me and you have to do what I say,” I wouldn’t want myself to tell me what to do, if that makes sense, if there were two of me!
I think we’d like to be able to both be equals, and so you don’t want that. An artifact is something you’ve deliberately built and that you’re going to own.
If you have something that’s sort of a humanoid servant that you own, then the word for that is slave.
And so I was trying to establish that look, we are going to own anything we build, and so therefore it would be wrong to make it a person, because we’ve already established that slavery of people is wrong and bad and illegal.
And so it never occurred to me that people would take that to mean that “the robots will be people that we just treat really badly.”
It’s like no, that’s exactly the opposite!
So, I already mentioned the idea that somebody might manage to clone people somehow, through what people call “whole brain uploading.” I don’t believe this is ever going to work, but people do talk about it, and people are spending tens of millions of dollars on it. I don’t think it’s actually computationally tractable. But if that were to happen, then I would be there saying, “Yes, this is a person.” And how can we stop that? The same way we stop human cloning, which is just to say, “Don’t do that.”
And particularly with AI my point is that it shouldn’t be a commercial product.
So if somebody does this in their basement or something well then we have a few exceptions, but I’m much more concerned about people mass producing such things.
Joanna Bryson thinks that people are confusing artificial intelligence with human clones, mostly due to Hollywood movies like Blade Runner and Steven Spielberg's A.I., both of which feature very humanoid beings. Take away the somewhat cuddly ideas the movies have given us about artificial intelligence and you have this: hyper-smart machines with almost no limit to what they can absorb. She posits that giving artificial intelligence the same rights a human has could result in pretty dire consequences... because AI has already proven that it can pick up negative human characteristics if those characteristics are in the data. Therefore, it's not crazy at all to think that AI could scan all of Twitter in one afternoon and pick up all the negativity we've unloaded there. If it has already proven that it not only can make the wrong decision but eventually will make the wrong decision when it comes to data mining and implementation, why give it the same powers as us in the first place?
What is human dignity? Here's a primer, told through 200 years of great essays, lectures, and novels.
- Human dignity means that each of our lives has an unimpeachable value simply because we are human, and therefore we are deserving of a baseline level of respect.
- That baseline requires more than the absence of violence, discrimination, and authoritarianism. It means giving individuals the freedom to pursue their own happiness and purpose.
- We look at incredible writings from the last 200 years that illustrate the push for human dignity with regard to slavery, equality, communism, free speech, and education.
The inherent worth of all human beings<p>Human dignity is the inherent worth of each individual human being. Recognizing human dignity means respecting human beings' special value—value that sets us apart from other animals; value that is intrinsic and cannot be lost.</p> <p>Liberalism—the broad political philosophy that organizes society around liberty, justice, and equality—is rooted in the idea of human dignity. Liberalism assumes each of our lives, plans, and preferences have some unimpeachable value, not because of any objective evaluation or contribution to a greater good, but simply because they belong to a human being. We are human, and therefore deserving of a baseline level of respect. </p> <p>Because so many of us take human dignity for granted—just a fact of our humanness—it's usually only when someone's dignity is ignored or violated that we feel compelled to talk about it. </p> <p>But human dignity means more than the absence of violence, discrimination, and authoritarianism. It means giving individuals the freedom to pursue their own happiness and purpose—a freedom that can be hampered by restrictive social institutions or the tyranny of the majority. The liberal ideal of the good society is not just peaceful but also pluralistic: It is a society in which we respect others' right to think and live differently than we do.</p>
From the 19th century to today<p>With <a href="https://books.google.com/ngrams/graph?year_start=1800&year_end=2019&content=human+dignity&corpus=26&smoothing=3&direct_url=t1%3B%2Chuman%20dignity%3B%2Cc0" target="_blank" rel="noopener noreferrer">Google Books Ngram Viewer</a>, we can chart mentions of human dignity from 1800-2019.</p>
American novelist, writer, playwright, poet, essayist and civil rights activist James Baldwin at his home in Saint-Paul-de-Vence, southern France, on November 6, 1979.
Credit: Ralph Gatti/AFP via Getty Images
The future of dignity<p>Around the world, people are still working toward the full and equal recognition of human dignity. Every year, new speeches and writings help us understand what dignity is—not only what it looks like when dignity is violated but also what it looks like when dignity is honored. In his posthumous essay, Congressman Lewis wrote, "When historians pick up their pens to write the story of the 21st century, let them say that it was your generation who laid down the heavy burdens of hate at last and that peace finally triumphed over violence, aggression and war."</p> <p>The more we talk about human dignity, the better we understand it. And the sooner we can make progress toward a shared vision of peace, freedom, and mutual respect for all. </p>
With just a few strategic tweaks, the Nazis could have won one of World War II's most decisive battles.
- The Battle of Britain is widely recognized as one of the most significant battles that occurred during World War II. It marked the first major victory of the Allied forces and shifted the tide of the war.
- Historians, however, have long debated the deciding factor in the British victory and German defeat.
- A new mathematical model took into account numerous alternative tactics that the Germans could have employed and found that just two tweaks stood between them and victory over Britain.
Two strategic blunders<p>Now, historians and mathematicians from York St. John University have collaborated to produce <a href="http://www-users.york.ac.uk/~nm15/bootstrapBoB%20AAMS.docx" target="_blank">a statistical model (docx download)</a> capable of calculating what the likely outcomes of the Battle of Britain would have been had the circumstances been different. </p><p>Would the German war effort have fared better had they not bombed Britain at all? What if Hitler had begun his bombing campaign earlier, even by just a few weeks? What if they had focused their targets on RAF airfields for the entire course of the battle? Using a statistical technique called weighted bootstrapping, the researchers studied these and other alternatives.</p><p>"The weighted bootstrap technique allowed us to model alternative campaigns in which the Luftwaffe prolongs or contracts the different phases of the battle and varies its targets," said co-author Dr. Jaime Wood in a <a href="https://www.york.ac.uk/news-and-events/news/2020/research/mathematicians-battle-britain-what-if-scenarios/" target="_blank">statement</a>. Based on the different strategic decisions that the German forces could have made, the researchers' model enabled them to predict the likelihood that the events of a given day of fighting would or would not occur.</p><p>"The Luftwaffe would only have been able to make the necessary bases in France available to launch an air attack on Britain in June at the earliest, so our alternative campaign brings forward the air campaign by three weeks," continued Wood. "We tested the impact of this and the other counterfactuals by varying the probabilities with which we choose individual days."</p><p>Ultimately, two strategic tweaks shifted the odds significantly towards the Germans' favor. 
Had the German forces started their campaign earlier in the year and had they consistently targeted RAF airfields, an Allied victory would have been extremely unlikely.</p><p>Say the odds of a British victory in the real-world Battle of Britain stood at 50-50 (there's no real way of knowing what the actual odds are, so we'll just have to select an arbitrary figure). If this were the case, changing the start date of the campaign and focusing only on airfields would have reduced British chances at victory to just 10 percent. Even if a British victory stood at 98 percent, these changes would have cut them down to just 34 percent.</p>
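The weighted bootstrap described above can be sketched in a few lines: replay the campaign many times by drawing its days at random with replacement, with weights that over- or under-represent certain kinds of day, then count how often each side comes out ahead. The day records, weights, and victory rule below are invented for illustration; they are not the researchers' actual data or model.

```python
# A minimal sketch of weighted bootstrapping over campaign days.
# Invented numbers: each "day" is a net attrition figure from the British
# point of view (positive = the RAF came out ahead that day).
import random

random.seed(0)

historical_days = [+3, -1, +2, -4, +1, +5, -2, +2, -3, +4]

def simulate_campaign(days, weights, length=60, rng=random):
    """Draw `length` days with replacement according to `weights`
    and report whether the cumulative balance favors Britain."""
    sample = rng.choices(days, weights=weights, k=length)
    return sum(sample) > 0  # crude stand-in for "British victory"

def victory_probability(days, weights, trials=5000):
    """Estimate P(British victory) over many resampled campaigns."""
    wins = sum(simulate_campaign(days, weights) for _ in range(trials))
    return wins / trials

uniform = [1] * len(historical_days)
# A counterfactual weighting: bad days for the RAF (standing in for sustained
# attacks on airfields) are drawn three times as often as good ones.
airfield_focus = [1 if d > 0 else 3 for d in historical_days]

print("historical mix:", victory_probability(historical_days, uniform))
print("airfield focus:", victory_probability(historical_days, airfield_focus))
```

Reweighting which days get drawn is the whole trick: the same historical events, sampled under different emphases, yield sharply different victory probabilities, which is how the model turns "what if the Luftwaffe had focused on airfields" into a number.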
A tool for understanding history<p>This technique, said co-author Niall Mackay, "demonstrates just how finely-balanced the outcomes of some of the biggest moments of history were. Even when we use the actual days' events of the battle, make a small change of timing or emphasis to the arrangement of those days and things might have turned out very differently."</p><p>The researchers also claimed that their technique could be applied to other uncertain historical events. "Weighted bootstrapping can provide a natural and intuitive tool for historians to investigate unrealized possibilities, informing historical controversies and debates," said Mackay.</p><p>Using this technique, researchers can evaluate other what-ifs and gain insight into how differently influential events could have turned out if only the slightest things had changed. For now, at least, we can all be thankful that Hitler underestimated Britain's grit.</p>