Language Pragmatics: Why We Can't Talk to Computers

Speech recognition technology continues to fascinate language and cognitive science researchers, and Apple’s introduction of the Siri voice assistant in its recent iPhone 4S was heralded by many as a great leap toward realizing the dream of a computer you can talk to. Fast-forward half a year, and while Siri has proved practically useful and sometimes impressively accurate for dictation, the world has not been turned upside down. A quick Google search for “Siri fail” turns up plenty of often unintentionally funny attempts by Apple’s voice recognition service to answer abstract questions or transcribe uncommon phrases.


But in a day and age when computers can win at Jeopardy and chess programs can consistently defeat the best human players, why hasn’t voice technology reached a similar level of mastery? Here is cognitive scientist, popular author, and Floating University professor Steven Pinker exploring the issue in a clip from his lecture “Say What? Linguistics as a Window to Understanding the Brain.”


But Siri works pretty well much of the time, right? Interestingly, Apple approached the voice recognition game using a framework that is about as far from how humans understand speech as you can get. Every time you speak to Siri, your iPhone connects to a cloud service, and, according to a Smart Planet article by Andrew Nusca, the following takes place:

The server compares your speech against a statistical model to estimate, based on the sounds you spoke and the order in which you spoke them, what letters might constitute it. (At the same time, the local recognizer compares your speech to an abridged version of that statistical model.) For both, the highest-probability estimates get the go-ahead.

Based on these opinions, your speech — now understood as a series of vowels and consonants — is then run through a language model, which estimates the words that your speech is comprised of. Given a sufficient level of confidence, the computer then creates a candidate list of interpretations for what the sequence of words in your speech might mean.
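To make that two-stage idea concrete, here is a minimal, purely illustrative sketch of a statistical recognizer choosing between candidate transcriptions: an “acoustic” score for how well each candidate word matches the sounds, a language-model score for how plausible the word sequence is, and the highest-probability combination wins. The candidate words, the probabilities, and the bigram model below are all invented for illustration; Apple’s real models are vastly larger and run server-side.

```python
from itertools import product

# Toy "acoustic model": for each chunk of audio, the probability of each
# candidate word given the sounds heard. All numbers are made up.
acoustic_scores = [
    {"ice": 0.5, "I": 0.5},
    {"cream": 0.5, "scream": 0.5},
]

# Toy bigram "language model": the probability of one word following another.
bigram = {
    ("ice", "cream"): 0.6,
    ("I", "scream"): 0.3,
}

def sentence_score(words):
    """Combine acoustic and language-model probabilities for one candidate."""
    score = 1.0
    for position, word in enumerate(words):
        score *= acoustic_scores[position][word]
    for prev, nxt in zip(words, words[1:]):
        score *= bigram.get((prev, nxt), 0.01)  # small default for unseen pairs
    return score

# Enumerate every combination of candidate words and keep the most probable.
candidates = product(*(chunk.keys() for chunk in acoustic_scores))
best = max(candidates, key=sentence_score)
print(" ".join(best))  # -> "ice cream"
```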

In this sense, Siri doesn’t really “understand” anything said to it; it simply uses a constantly expanding probability model to attach combinations of letters to the sounds you’re saying. And once it has computed the most likely identity of your words, it cross-checks them against a server database of successful answers to similar combinations of words and provides you with a probable answer. This is a system of speech recognition that sidesteps the pragmatics question discussed by Pinker by employing a huge vocabulary and a real-time, cloud-based feedback database. And Siri’s trademark cheekiness? Apple employs thousands of writers to manually input phrases and responses into the Siri cloud, continually building out its “vocabulary” while relying on statistics for the context.
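That final step, mapping the recognized words to an answer, can be sketched just as crudely: a lookup against pre-written responses, chosen by surface overlap rather than by understanding. The table and the matching rule below are invented for illustration and are not Apple’s actual system, but they show how far a purely statistical match can get without any grasp of meaning.

```python
import string

# Toy table of pre-written question/response pairs. Both the entries and the
# matching rule are invented; the real service is far larger and smarter.
canned_responses = {
    "what is the meaning of life": "42, obviously.",
    "will it rain today": "There is a chance of rain this afternoon.",
    "tell me a joke": "I would, but my timing is measured in milliseconds.",
}

def respond(transcription):
    """Return the canned response whose key shares the most words with the input."""
    cleaned = transcription.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    best_key = max(canned_responses, key=lambda key: len(words & set(key.split())))
    return canned_responses[best_key]

print(respond("What's the meaning of life?"))  # -> "42, obviously."
```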

Does this constitute true speech recognition, or is it just a more robust version of old-time AOL chat bots? If this is how speech recognition technology evolves, do you think its database will eventually grow large enough that it becomes indistinguishable from true speech recognition, even if there’s no pragmatic “ghost in the machine,” as it were? Or will computers never be able to truly “learn” language?

Visit The Floating University to learn more about our approach to disrupting higher education, or check out Steven Pinker's eSeminar “Say What? Linguistics as a Window to Understanding the Brain.”
