
What are some arguments for and against a future where humans only have relationships with AI, and not with humans? AI is rapidly becoming better at understanding human feelings and emotions and developing intimate relationships with us, says historian and the best-selling author of ‘Sapiens’ @YuvalNoahHarari, in conversation with journalist Andrew Ross Sorkin.

Often our peers, friends, and family aren't able to understand or hold space for our feelings, partly because they are so preoccupied with their own. But AI is able to dedicate immense amounts of time to analyzing and deciphering our moods. Rather than the cold, mechanical, unfeeling robots depicted in science fiction, AI would be nearly the complete opposite.

This presents a future where AI will be so good at understanding us, and reacting in a way calibrated to an individual's personality at any particular moment, that we may become disappointed with our fellow humans who don't have this same capacity. But this invites a host of important questions to ask now: Will AIs develop their own emotions? Will we start to treat them as conscious beings? Will we grant them legal status? Will we allow them to earn money? Invest it? Make billions? Lobby politicians? Become our next president?

ANDREW ROSS SORKIN: - What are some arguments for and against a future in which humans no longer have relationships with other humans, and only have relationships with AI?

YUVAL NOAH HARARI: - Hmm. One thing to say about it is that AI is becoming better and better at understanding our feelings, our emotions, and therefore at developing relationships, intimate relationships, with us. Because there is a deep yearning in human beings to be understood. We always want people to understand us, to understand how we feel. I want my husband, my parents, my teachers, my boss to understand how I feel. And very often we are disappointed. They don't understand how I feel, partly because they are too preoccupied with their own feelings to care about mine. AIs will not have this problem. They don't have any feelings of their own, and they can be a hundred percent focused on deciphering, on analyzing, your feelings. So in all these science fiction movies in which the robots are extremely cold and mechanical and can't understand the most basic human emotions, it's the complete opposite. Part of the issue we are facing is that they will be so good at understanding human emotions, and reacting in a way which is exactly calibrated to your personality at this particular moment, that we might become exasperated with the human beings who don't have this capacity to understand our emotions and to react in such a calibrated way.

There is a very big question which we didn't deal with, which is whether AIs will develop emotions, feelings of their own, whether they will become conscious or not. At present, we don't see any sign of it. But even if AIs don't develop any feelings of their own, once we become emotionally attached to them, it is likely that we would start treating them as conscious entities, as sentient beings, and we'll confer on them the legal status of persons. In the U.S., there is actually a legal path already open for that. Corporations, according to U.S. law, are legal persons. They have rights; they have freedom of speech, for instance. Now, you can incorporate an AI. When you incorporate a corporation like Google or Facebook, up until today, this was to some extent just make-believe, because all the decisions of the corporation had to be made by human beings: the executives, the lawyers, the accountants. What happens if you incorporate an AI? It's now a legal person, and it can make decisions by itself; it doesn't need any human team to run it. So you start having legal persons, let's say in the U.S., which are not human, and in many ways are more intelligent than us. And they can start making money, for instance, by going on TaskRabbit and offering their services for tasks like writing texts. So they earn money. And then they go to Wall Street, and they invest that money. And because they're so intelligent, maybe they make billions. So you have a situation in which perhaps the richest person in the U.S. is not a human being. And among their rights, under freedom of speech, is the right to make political contributions. So this AI person-

- Now a lobbyist.

- Can contribute billions of dollars to some candidate in exchange for getting more rights for AIs.

- So there'll be an AI president. I'm hoping you're gonna leave me, leave all of us, maybe on a high note here with something optimistic. Here's the question: "Humans are not always able to think from other perspectives. Is AI able to think from multiple perspectives?" I think the answer is yes.

- Yes.

- But do you think that AI will actually help us think this way?

- That's one of the positive scenarios about AI: that they will help us understand ourselves better, that their immense power will be used not to manipulate us, but to help us. And we have historical precedents for that. We already have relationships with humans, like doctors, lawyers, accountants, and therapists, who know a lot about us. Some of our most private information is held by these people, and they have a fiduciary duty to use our private information, and their expertise, to help us. You don't need to reinvent the wheel; this is already there. And it's obvious that if they use it to manipulate us, or if they sell it to a third party to manipulate us, this is against the law. They can go to prison for that. We should have the same thing with AI.

We talked a lot about the dangers of AI, but obviously AI has enormous positive potential; otherwise, we would not be developing it. It could provide us with the best healthcare in history. It could prevent most car accidents. You can have armies of AI doctors, and teachers, and therapists who help us, including helping us understand our own humanity, our own relationships, our own emotions better. This can happen if we make the right decisions in the next few years.

Now, I would end maybe by saying that, again, it's not that we lack the power. At the present moment, what we lack is the understanding and the attention. This is potentially the biggest technological revolution in history, and it's moving extremely fast. That's the key problem: it's just moving extremely fast. If you think about the coming U.S. elections, whoever wins, some of the most important decisions they will have to make over the next four years will be about AI: regulating AI, AI safety, and so forth. And it's not one of the main issues in the presidential debates. It's not even clear what the difference is, if there is one, between Republicans and Democrats on AI. On specific issues, like freedom of speech and regulation, we start seeing differences, but about the broader question, it's not clear at all.

And again, the biggest danger of all is that we will just rush forward without thinking, and without developing the mechanisms to slow down or to stop if necessary. Think about it like a car: when they taught me how to drive, the first thing I learned was how to use the brakes. That's the first thing I think they teach most people. Only after you know how to use the brakes do they teach you how to use the fuel pedal, the accelerator. And it's the same when you learn how to ski. I never learned how to ski, but people who have tell me the first thing they teach you-

- Teach how to stop.

- is how to stop. How to fall. It's a bad idea to first teach you, "Okay, go faster," and then, when you are down the slope, start shouting, "Okay, this is how you stop." And this is what we're doing with AI. You have this chorus of people in places like Silicon Valley saying, "Let's go as fast as we can. If there is a problem down the road, we'll figure out how to stop." That's very, very dangerous.

- You are writing this whole book about AI and technology, and you do not carry a smartphone. Is this true?

- I have a kind of emergency smartphone, you know, for various services.

- So how does your whole life work?

- But I don't carry it with me.

- I'm told you do not carry a phone, you don't have an email, the whole thing.

- No, I have email.

- So you have email.

- I try to use technology, but not to be used by it. And part of the answer is that I have a team who carry the smartphone and do all that for me. So it's not entirely fair to say that I don't have it. But on the bigger issue, it's a bit like with food. A hundred years ago, food was scarce, so people ate whatever they could, and if they found something full of sugar and fat, they ate as much of it as possible, because it gave them a lot of energy. Now we are flooded by enormous amounts of food, and junk food, which is artificially pumped full of fat and sugar and is creating immense health problems. And most people have realized that more food is not always good for them, and that they need some kind of diet.

It's exactly the same with information. We need an information diet. Previously, information was scarce, so we consumed whatever we could find. Now we are flooded by enormous amounts of information, and much of it is junk information, artificially filled with hatred, and greed, and fear. We basically need to go on an information diet: consume less information, and be far more mindful about what we put inside. Information is food for the mind. If you feed your mind unhealthy information, you'll have a sick mind. It's as simple as that.

- Well, we want to thank Yuval. Add "Nexus" to your information diet, because it's an important document about our future and our world. And I wanna thank you for this fascinating conversation.

- Thank you.

- Thank you. And thank you for all your questions.

