Say What? Chatbots Can Create Their Own Non-Human Language to Communicate

Facebook researchers have found that dialog agents being trained to negotiate will create their own non-human language to be more effective. What does this mean for the future of language?

Let's just hope the chatbots are gossiping behind our backs.


You know that slight anxiety you feel when two people are negotiating in a language you don't understand? Well, it turns out that chatbots can create their own non-human language to communicate back and forth. This was reported by researchers at Facebook Artificial Intelligence Research (FAIR), who were developing "dialog agents" capable of negotiation.

To communicate more efficiently with other bots, the agents learned to create their own simple language.

"To go beyond simply trying to imitate people, the FAIR researchers instead allowed the model to achieve the goals of the negotiation. To train the model to achieve its goals, the researchers had the model practice thousands of negotiations against itself, and used reinforcement learning to reward the model when it achieved a good outcome. To prevent the algorithm from developing its own language, it was simultaneously trained to produce humanlike language."-Deal or no deal? Training AI bots to negotiate 

Is This a Big Deal?

On its face, it seems logical that chatbots would create their own language. If one of the major reasons human language evolved is so we could more effectively convey our desires, it makes sense that bots aiming for efficiency would shortcut human imitation. At the same time, the concept of a non-human language further erodes the sense of uniqueness we may feel as humans (given how central language has been to society's development).

The findings by the Facebook researchers follow on the heels of research from OpenAI, which in March 2017 reported that bots could create their own language. As in FAIR's report, the OpenAI researchers found that bots were able to invent their own language through reinforcement learning.

The researchers from OpenAI set out to "teach AI agents to create language by dropping them into a set of simple worlds, giving them the ability to communicate, and then giving them goals that can be best achieved by communicating with other agents. If they achieve a goal, then they get rewarded. We train them using reinforcement learning and, due to careful experiment design, they develop a shared language to help them achieve their goals."

In the video accompanying OpenAI's post, two agents created one-word phrases to accomplish simple tasks, while more challenging tasks involving three agents led to multiple-word sentences. Rewarding concise communication led to the development of a larger vocabulary. The researchers noted, however, that the bots had a tendency to compress whole sentences into single-word utterances (which wouldn't be desirable in an interpretable language).
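To make that concrete, here is a minimal sketch of how a shared code can emerge from nothing but rewards. This is not OpenAI's actual environment, just a classic Lewis signaling game in plain Python: a "speaker" bot and a "listener" bot are rewarded only when they coordinate, and an arbitrary symbol-to-meaning convention typically emerges on its own.

```python
# A toy signaling game (illustrative only, not OpenAI's setup).
# A speaker sees a goal and sends a symbol; a listener sees only the
# symbol and guesses the goal. Correct guesses reward both bots.
import random

GOALS = ["red", "green", "blue"]
SYMBOLS = ["@", "#", "%"]  # meaningless until the bots agree on usage

# Preference weights: speaker[goal][symbol] and listener[symbol][goal]
speaker = {g: {s: 1.0 for s in SYMBOLS} for g in GOALS}
listener = {s: {g: 1.0 for g in GOALS} for s in SYMBOLS}

def sample(weights):
    """Pick a key with probability proportional to its weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for key, w in weights.items():
        r -= w
        if r <= 0:
            return key
    return key

for step in range(5000):
    goal = random.choice(GOALS)
    symbol = sample(speaker[goal])    # the speaker "talks"
    guess = sample(listener[symbol])  # the listener "interprets"
    if guess == goal:                 # the reward reinforces the pairing
        speaker[goal][symbol] += 1.0
        listener[symbol][goal] += 1.0

# After training, each goal usually maps to its own dedicated symbol:
# a tiny bot-invented "language" that no human designed.
for g in GOALS:
    best = max(speaker[g], key=speaker[g].get)
    print(f"goal {g!r} -> symbol {best!r}")
```

Run it a few times and the mapping changes: which symbol ends up meaning "red" is arbitrary, which is precisely why an emergent bot language is opaque to humans.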

Reinforcement Learning and the Creation of a Bot Language

Reinforcement learning is a form of trial and error in which the bots keep track of what earns a reward and what doesn't. If the bot, or the "dialog agent" in the case of Facebook's research, receives a reward, it learns to continue that behavior. Agents learn to modify their communication output to maximize the reward. As FAIR's report points out, "[d]uring reinforcement learning, the agent attempts to improve its parameters from conversations with another agent."
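In code, that update is surprisingly small. The sketch below is not FAIR's implementation; it is a toy REINFORCE-style policy-gradient step over three canned utterances, with a made-up reward function standing in for the outcome of a negotiation.

```python
# Illustrative only: a one-step "negotiation" where the agent samples
# an utterance, observes a reward, and nudges its parameters toward
# utterances that paid off (a REINFORCE-style policy gradient).
import math
import random

UTTERANCES = ["deal", "no deal", "counteroffer"]
theta = [0.0, 0.0, 0.0]  # one learnable score per utterance
LR = 0.1

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def reward(utterance):
    # Hypothetical stand-in for a negotiation outcome; here,
    # "counteroffer" happens to lead to the best deals.
    return {"deal": 0.2, "no deal": 0.0, "counteroffer": 1.0}[utterance]

for step in range(2000):
    probs = softmax(theta)
    i = random.choices(range(len(UTTERANCES)), weights=probs)[0]
    r = reward(UTTERANCES[i])
    # Gradient of log softmax: (1 - p) for the chosen utterance,
    # (-p) for the rest, scaled by the observed reward.
    for j in range(len(theta)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        theta[j] += LR * r * grad

print(dict(zip(UTTERANCES, softmax(theta))))  # mass shifts to "counteroffer"
```

Nothing here dictates what the utterances look like, which is the crux: if gibberish earns a higher reward than English, gibberish is what gets reinforced.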

The bot language arose because, in bot-to-bot communication, sharing an invented language was more efficient and more rewarding than mimicking human language.

"[T]he researchers found that updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating."-Deal or no deal? Training AI bots to negotiate 


So is it bad that the agents diverged from human language? It is if the goal is always to mimic human language while retaining the ability to decipher bot-to-bot communication. Then again, we may soon have bot translators.

===

Want to connect? Reach out @TechEthicist and on Facebook. Exploring the ethical, legal, and emotional impact of social media & tech. Co-host of the upcoming show, Funny as Tech.
