Can AI Develop Empathy?


 

World's first emotion-recognizing robot Pepper takes a photo with an admirer.

As we stand at the precipice of the robot revolution, tech and science luminaries are weighing in on the impact it is likely to have on the human race. Most foresee two ultimate scenarios: the robot apocalypse made vivid by the Terminator movies, or robots becoming our servants and companions, much like the Asimovian world of I, Robot. Stephen Hawking publicly warned of the first scenario, and with good reason. Without a moral backbone, AI could go horribly wrong.


Microsoft got a glimpse of this recently when its tweeting AI program, presented as a 19-year-old girl named Tay, quickly turned into something of a Nazi and a sexpot. Since Tay was programmed to learn by interacting with others on Twitter, the tech giant says the program was quickly set upon by a cluster of internet trolls who steered it in the wrong direction. The problem of AI going haywire has become so worrisome that DeepMind, a division of Google, is now working with researchers at Oxford to develop a “kill switch,” lest AI become a serious threat.

Basically, a computer with human-level intelligence, driven solely by its own optimization, may find that humans are in its way, either because they try to shut it down or because they withhold resources it wants. AI theorist Eliezer Yudkowsky once wrote, "The AI neither hates you nor loves you, but you are made out of atoms that it can use for something else."

It’s not just Hollywood but some of the world’s most brilliant minds who fear a robot apocalypse.

One way to make sure robots and AI stay on the helpful side is to imbue them with empathy, at least according to Murray Shanahan, a professor of cognitive robotics at Imperial College London. But is it actually possible to develop a feeling robot? Shanahan thinks it is not only possible but necessary.

That’s why he, Elon Musk, and Stephen Hawking together penned an open letter to the Centre for the Study of Existential Risk (CSER), promoting more research in AI to uncover its “potential pitfalls.” In the letter, Musk says that AI could become more dangerous than nuclear weapons, while Hawking says it could spell the end of the human race. Shanahan suggests creating artificial general intelligence (AGI) with a human-like psychological framework, or even modeling AI on our own neurological makeup.

There is still time to consider our direction. Experts believe AI will reach human-like intelligence anywhere from 15 to 100 years from now; the year 2100 seems like a good one to hang your hat on, at least according to Shanahan. Besides a dangerous AI with no moral capacity, there is the fear that the current economic, social, and political climate may lead to AGI that is purposely dangerous. In fact, unbridled capitalism has been implicated outright. "Capitalist forces will drive incentive to produce ruthless maximization processes. With this there is the temptation to develop risky things," Shanahan said. Governments or companies could use AGI to rig markets or elections, or even to build military technology head and shoulders above anything currently in existence. Other governments would have to respond, creating an AI arms race.

AI could bring a whole new dimension to military capability.

Since there is no data to suggest AI would go bad, Shanahan and other experts say the technology need not be banned; instead, the necessary safeguards should be put in place to make sure it stays friendly and peaceful. To do so, Shanahan says, the machine will need certain capabilities, such as the ability to form relationships, to recognize and understand the emotions of others, and even to feel empathy itself. One approach is to mimic the human brain artificially. Since we haven’t mapped the brain entirely, that doesn’t seem likely in the short term, though scientists at the Human Connectome Project (HCP) are working on it. Some say it may be possible, at least in theory, to create algorithms and computational structures that behave like a human brain.

Robots that can recognize and respond to human emotion are already on the market. One is Nao, a robot just two feet tall made by Aldebaran Robotics. Equipped with facial recognition software, Nao can make eye contact and responds when you talk to it; so far, it has been used successfully in classrooms to help autistic children. A newer model from the same firm, named Pepper, can recognize not only words but also facial expressions and body language, and responds appropriately in kind. But here the robot itself isn’t feeling anything. For that, it would need self-awareness, which requires the ability to feel what others are feeling and to think about those feelings.
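
For a concrete sense of what "recognizing" an emotion means in software, here is a minimal sketch of a typical pipeline: detect a face in an image, then hand the cropped face to a classifier that maps it to a label such as "happy" or "angry." This is not Pepper’s or Nao’s actual code; the face detection uses OpenCV’s bundled Haar cascade, while the classifier object and its labels are hypothetical stand-ins for whatever trained model a real robot would carry.

import cv2

# Load OpenCV's bundled frontal-face detector (a Haar cascade).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_emotions(image_path, classifier):
    """Return (x, y, w, h, label) for each face found in the image.

    `classifier` is assumed to expose a predict() method that maps a
    grayscale face crop to an emotion label -- a hypothetical stand-in
    for a trained expression-recognition model.
    """
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        face_crop = gray[y:y + h, x:x + w]
        label = classifier.predict(face_crop)  # e.g. "happy", "sad", "angry"
        results.append((x, y, w, h, label))
    return results

Recognizing an expression in pixels, of course, is a long way from feeling anything about it, which is exactly the gap the rest of this article is concerned with.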

We are getting close to making the outside look human. But the inside? That’s a lot more complicated.

To have real empathy is to recognize in others emotions that you yourself have felt. For such emotions to take root, robots will have to have experiences like growing up, succeeding, and failing. They will need to feel emotions like attachment, desire, accomplishment, love, anger, worry, fear, and perhaps even jealousy. Robots will also need to engage in metacognition, which is thinking about one’s own thinking and emotions. Science fiction writer Philip K. Dick foresaw this; in his novels, robots were simply imbued with artificial memories. The recent film Ex Machina raises another question: if a robot were able to show authentic emotion, would you believe it? It could be difficult to distinguish a robot that is merely very good at responding appropriately from one that actually holds the same feelings as you.

And suppose all of these problems are solved and the questions favorably answered. Will robots be our equals or our subordinates? Should their emotions be controlled by humans, or would that be a form of slavery, meaning they should be free to think and feel as they wish? There are a lot of difficult issues to be worked out. At this point, our social, economic, and political structures may not be ready for such a disruptive change. Even so, Shanahan believes we have time now to begin wading through these thorny matters, putting safeguards in place, and setting aside resources to ensure that robots become humanity’s closest friends instead of our ultimate adversaries.

