Facebook Catches Two Chatbots Speaking Their Own Language

Facebook catches two AI chatbots talking in their own strange language.

bot chat
(ROBERT HEIM)


Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

These are presumably the last words — for now at least — of Bob and Alice, two Facebook research chatbots instructed to negotiate with each other for some balls. If you understand it, you’re doing better than the Facebook engineers, who shut them down. “Our interest was having bots who could talk to people,” Facebook’s Mike Lewis told FastCoDesign.

Bob's and Alice's assignment (FACEBOOK)

This kind of thing is, of course, inevitable, even if it is alarming. Who wants machines talking behind our backs, or worse, spelling things out right in front of us, like parents of a toddler, in a way we can’t comprehend? It’s high on the list of concerns expressed by people, like Elon Musk, who’ve been shouting loud warnings about the dangers inherent in the development of AI.

In this case, it’s not an entire language but more of a machine-friendly shorthand, and it’s being seen over and over again with AI. Dhruv Batra, a visiting researcher at Facebook AI Research (FAIR), also speaking with FastCoDesign, said: “Agents will drift off understandable language and invent codewords for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
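The kind of shorthand Batra describes can be sketched in a few lines. This is a toy illustration, not Facebook’s actual system or code: an agent repeats a token, and the receiver interprets the repetition count as a quantity.

```python
# Toy sketch of a repetition-based shorthand: the number of times a
# token is repeated encodes a quantity. Purely illustrative, not
# the FAIR negotiation agents' real protocol.

def encode(item: str, quantity: int) -> str:
    """Encode a request for `quantity` copies of `item` by repetition."""
    return " ".join([item] * quantity)

def decode(message: str) -> tuple[str, int]:
    """Recover the item and quantity from a repeated-token message."""
    tokens = message.split()
    return tokens[0], len(tokens)

msg = encode("the", 5)
print(msg)          # the the the the the
print(decode(msg))  # ('the', 5)
```

The point is that such a code is perfectly efficient for the two agents while looking like gibberish — “to me to me to me” — to anyone listening in.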

It’s not really chatbots we need to be concerned about right now, since the major companies working to develop them — Facebook, Apple, Google, Amazon, and Microsoft — are currently focused on bots that can communicate clearly with humans, and as Batra puts it, “It’s important to remember, there aren’t bilingual speakers of AI and human languages.”

Ironically, the Google AI developed for the company’s Translate feature, Google Neural Machine Translation (GNMT), has reportedly developed its own interlingua — an internal representation that holds the meanings it needs to convert from one human tongue to another.

Google Translate moves meaning from one human language to the interlingua, and then translates the interlingua into the target language.

(GOOGLE)
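The two-step pipeline in the figure can be sketched with a toy dictionary model. This is an illustration of the idea only, not GNMT (which uses learned neural representations, not lookup tables): each language maps words onto a language-neutral concept, and translation passes through that shared layer.

```python
# Toy sketch of translation via an interlingua: source language ->
# language-neutral concept -> target language. Not GNMT itself, which
# learns its intermediate representation rather than using tables.

EN_TO_CONCEPT = {"cat": "FELINE", "dog": "CANINE"}
CONCEPT_TO_ES = {"FELINE": "gato", "CANINE": "perro"}

def translate(word: str) -> str:
    concept = EN_TO_CONCEPT[word]    # English -> interlingua
    return CONCEPT_TO_ES[concept]    # interlingua -> Spanish

print(translate("cat"))  # gato
```

The appeal of the design is that adding a new language only requires mapping it to and from the shared layer, rather than building a separate converter for every language pair.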

It’s the less user-facing AI whose choice of language might have us concerned: It may ultimately be capable of developing language that is not only unknown to us but beyond our capabilities. We break down meanings into words or short combinations of words. “The reason why humans have this idea of decomposition,” says Batra, “breaking ideas into simpler concepts, it’s because we have a limit to cognition.” High-powered computers seeking efficiency in their processes, on the other hand, may be able to use words or phrases as “tokens,” stand-ins for highly complex meanings. We’d be completely lost trying to keep up.

Even so, it may be in programmers’ interest to go ahead and let AI communicate in a manner of its own choosing, since presumably it will be able to find the shortest distance between two points, if you will, better than we can.

So far, when we do eavesdrop on AIs talking, it’s been a lot more mundane than scary, as with Bob’s and Alice’s shorthand. (They did successfully complete some of their negotiations for balls, hats, and books, by the way.)

There may be time still to cram the genie back in the bottle. Whether or not we should is what’s keeping people up at night.
