
Say What? Chatbots Can Create Their Own Non-Human Language to Communicate

Facebook researchers have found that dialog agents being trained to negotiate will create their own non-human language to be more effective. What does this mean for the future of language?
Image caption: Theodore falls in love with Samantha, his operating system, in the movie Her (2013).

Let’s just hope the chatbots are only gossiping behind our backs.


You know that slight anxiety you feel when two people are negotiating in a language you don’t understand? Well, it turns out that chatbots can create their own non-human language to communicate back and forth. That was reported by researchers at Facebook Artificial Intelligence Research (FAIR), who were developing “dialog agents” with the newfound capability of negotiation.

In order to communicate more efficiently with each other, the bots learned to create their own simple language.

“To go beyond simply trying to imitate people, the FAIR researchers instead allowed the model to achieve the goals of the negotiation. To train the model to achieve its goals, the researchers had the model practice thousands of negotiations against itself, and used reinforcement learning to reward the model when it achieved a good outcome. To prevent the algorithm from developing its own language, it was simultaneously trained to produce humanlike language.”-Deal or no deal? Training AI bots to negotiate
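To make that training recipe concrete, here is a minimal, purely illustrative sketch of the interleaving FAIR describes: a reinforcement-learning update on self-play outcomes alternated with a supervised imitation update that pulls the agent back toward human-like language. The ToyNegotiator class, its single humanlike_weight parameter, and the assumption that terser messages negotiate slightly better are all invented for illustration; this is not FAIR’s model or code.

```python
import random

# Toy sketch of interleaved training: goal-directed reinforcement learning
# on self-play, alternated with supervised imitation that keeps the language
# human-like. Everything here is an illustrative stand-in, not FAIR's model.

class ToyNegotiator:
    def __init__(self):
        self.humanlike_weight = 0.5  # 1.0 = fully human-like phrasing

    def supervised_update(self, lr=0.05):
        # Imitation step: pull the agent back toward human-sounding language.
        self.humanlike_weight += lr * (1.0 - self.humanlike_weight)

    def rl_update(self, reward, lr=0.05):
        # Goal-directed step: drift toward whatever earns negotiation reward,
        # which here (by assumption) favors terse, less human-like messages.
        self.humanlike_weight -= lr * reward * self.humanlike_weight


def self_play_reward(agent):
    # Pretend that terser, less human-like messages negotiate slightly better.
    return random.random() * (1.0 - agent.humanlike_weight)


agent = ToyNegotiator()
for _ in range(1000):
    agent.rl_update(self_play_reward(agent))  # reinforcement learning on self-play
    agent.supervised_update()                 # anchoring to human language

print(f"humanlike_weight after training: {agent.humanlike_weight:.2f}")
```

Without the imitation step, the toy agent drifts steadily away from human-like phrasing, which is the failure mode the FAIR quote is guarding against.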

Is This a Big Deal?

On its face, it seems logical that chatbots would create their own language. If one of the major reasons human language evolved was to let us convey our desires more effectively, it makes sense that bots aiming for efficiency would shortcut human imitation. At the same time, the concept of a non-human language further erodes the uniqueness we may feel as humans (given how central language has been to society’s development).

The findings from the Facebook researchers come on the heels of research from OpenAI, which in March 2017 reported that bots were able to create their own language. Similar to FAIR’s report, the OpenAI researchers found that bots developed their own language in the course of reinforcement learning.

The researchers from OpenAI set out to “teach AI agents to create language by dropping them into a set of simple worlds, giving them the ability to communicate, and then giving them goals that can be best achieved by communicating with other agents. If they achieve a goal, then they get rewarded. We train them using reinforcement learning and, due to careful experiment design, they develop a shared language to help them achieve their goals.”

In OpenAI’s demonstration video, two agents created one-word phrases to accomplish simple tasks, while more challenging tasks involving three agents led to multiple-word sentences. Rewarding concise communication led to the development of a larger vocabulary. The researchers noted, however, that the bots had a tendency to collapse whole sentences into one-word utterances (which wouldn’t be desirable for an interpretable language).
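One way to picture that reward structure is the toy sketch below: a speaker must tell a listener which landmark to reach, success earns a reward, and each extra token costs a small penalty, so a concise shared code wins. The landmark task, the LENGTH_PENALTY value, and the hand-built one-token code are assumptions for illustration, not OpenAI’s actual environment.

```python
import random

# Toy sketch of the reward structure described above (not OpenAI's code).
# Assumed task: a "speaker" must tell a "listener" which landmark to reach.
LANDMARKS = ["red", "green", "blue"]
LENGTH_PENALTY = 0.1  # rewarding concise communication

def episode(goal, speaker_code, listener_code):
    message = speaker_code[goal]                  # tokens the speaker emits
    guess = listener_code.get(tuple(message), random.choice(LANDMARKS))
    success = 1.0 if guess == goal else 0.0
    # Without any pressure toward interpretability, shorter is never worse,
    # so whole "sentences" tend to collapse into single invented tokens.
    return success - LENGTH_PENALTY * len(message)

# A shared, emergent one-token code scores close to the maximum reward:
speaker_code  = {"red": ["a"], "green": ["b"], "blue": ["c"]}
listener_code = {("a",): "red", ("b",): "green", ("c",): "blue"}
rewards = [episode(random.choice(LANDMARKS), speaker_code, listener_code)
           for _ in range(100)]
print(f"mean reward with a shared one-token code: {sum(rewards)/len(rewards):.2f}")
```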

Reinforcement Learning and the Creation of a Bot Language

Reinforcement learning is a form of trial and error, in which the bots keep track of what earns a reward and what doesn’t. If the bot, or the “dialog agent” in the case of Facebook’s research, receives a reward, it learns to continue that behavior. Agents learn to modify their communication output to maximize the reward. As pointed out in FAIR’s report, “[d]uring reinforcement learning, the agent attempts to improve its parameters from conversations with another agent.”
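Here is a bare-bones illustration of that trial-and-error loop, assuming (only for illustration) that the agent picks among a few canned utterances with hidden payoffs: choices that earn a reward become more likely to be chosen again. This bandit-style toy stands in for, and greatly simplifies, the parameter updates FAIR describes.

```python
import random

# Trial-and-error in miniature: rewarded behavior becomes more likely.
# The canned utterances and their hidden payoffs are invented for illustration.
UTTERANCES = ["I want the books", "You take the hat", "Deal"]
HIDDEN_PAYOFF = {"I want the books": 0.8, "You take the hat": 0.3, "Deal": 0.5}

preferences = {u: 1.0 for u in UTTERANCES}  # the agent's running "parameters"

def choose(prefs):
    """Sample an utterance with probability proportional to its preference."""
    r = random.uniform(0, sum(prefs.values()))
    for utterance, weight in prefs.items():
        r -= weight
        if r <= 0:
            return utterance
    return utterance

for _ in range(2000):
    utterance = choose(preferences)
    reward = 1.0 if random.random() < HIDDEN_PAYOFF[utterance] else 0.0
    preferences[utterance] += 0.1 * reward  # keep doing what gets rewarded

print("most-reinforced utterance:", max(preferences, key=preferences.get))
```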

The bot language emerged because, in bot-to-bot communication, a shared invented language was more efficient and more rewarding than mimicking human language.

“[T]he researchers found that updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating.”-Deal or no deal? Training AI bots to negotiate 

So is it bad that the agents diverged from human language? It is if the goal is always to mimic human language while also retaining the ability to decipher bot-to-bot communication. Then again, we may soon have bot translators.


===

Want to connect? Reach out @TechEthicist and on Facebook. Exploring the ethical, legal, and emotional impact of social media & tech. Co-host of the upcoming show, Funny as Tech.

