
Language Pragmatics: Why We Can't Talk to Computers

February 17, 2012, 3:00 PM

Speech recognition technology continues to fascinate language and cognitive science researchers, and Apple’s introduction of the Siri voice assistant in its recent iPhone 4S was heralded by many as a great leap toward realizing the dream of a computer you can talk to. Fast-forward half a year, and, while Siri has proved practically useful and sometimes impressively accurate for dictation, the world has not been turned upside down. A quick Google search for “Siri fail” turns up the often unintentionally funny attempts by Apple’s voice recognition service to answer abstract questions or transcribe uncommon phrases.

But in a day and age when computers can win at Jeopardy and chess programs can consistently defeat the best human players, why hasn’t voice technology reached a similar level of mastery? Here is cognitive scientist, popular author, and Floating University professor Steven Pinker exploring the issue in a clip from his lecture “Say What? Linguistics as a Window to Understanding the Brain.”


In short, it’s all about context, and the kinds of leaps easily achieved by humans in casual conversation have thus far remained outside the reach of dynamic language processing programs. One need only watch Microsoft’s infamous 2006 demonstration of its Windows Vista voice recognition software to be reminded of how nascent this technology remains.


But Siri works pretty well much of the time, right? Interestingly, Apple approached the voice recognition game using a framework that is about as far from how humans understand speech as you can get. According to a SmartPlanet article by Andrew Nusca, every time you speak to Siri your iPhone connects to a cloud service, and the following takes place:

The server compares your speech against a statistical model to estimate, based on the sounds you spoke and the order in which you spoke them, what letters might constitute it. (At the same time, the local recognizer compares your speech to an abridged version of that statistical model.) For both, the highest-probability estimates get the go-ahead.

Based on these opinions, your speech — now understood as a series of vowels and consonants — is then run through a language model, which estimates the words that your speech is comprised of. Given a sufficient level of confidence, the computer then creates a candidate list of interpretations for what the sequence of words in your speech might mean.
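To make that two-stage process concrete, here is a minimal sketch in Python of the statistical idea: candidate letters scored by an acoustic table, then whole words rescored by a language model. The tables, names, and probabilities below are invented for illustration and bear no relation to Apple’s actual (and unpublished) models.

# Toy sketch of a two-stage statistical speech pipeline.
# All models and numbers here are illustrative assumptions, not Apple's.

# Stage 1: an "acoustic model" maps each chunk of sound to candidate
# letter groups with probabilities (here, a hard-coded toy table).
ACOUSTIC_MODEL = {
    "sound_1": [("w", 0.7), ("wh", 0.3)],
    "sound_2": [("ea", 0.6), ("e", 0.4)],
    "sound_3": [("ther", 0.8), ("der", 0.2)],
}

# Stage 2: a "language model" scores whole words by how common they are.
LANGUAGE_MODEL = {"weather": 0.05, "whether": 0.03}

def recognize(sound_chunks):
    """Return candidate words ranked by acoustic * language probability."""
    # Expand every combination of letter estimates for the sound chunks...
    candidates = [("", 1.0)]
    for chunk in sound_chunks:
        candidates = [
            (prefix + letters, p * p_letters)
            for prefix, p in candidates
            for letters, p_letters in ACOUSTIC_MODEL[chunk]
        ]
    # ...then rescore each full candidate with the language model
    # (unknown words get a tiny smoothing probability).
    scored = [
        (word, p_acoustic * LANGUAGE_MODEL.get(word, 1e-9))
        for word, p_acoustic in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

print(recognize(["sound_1", "sound_2", "sound_3"]))
# Top result: "weather" -- likely letters that also form a likely word.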

In this sense, Siri doesn’t really “understand” anything said to it; it simply uses a constantly expanding probability model to attach combinations of letters to the sounds you’re saying. Once it has computed the most likely identity of your words, it cross-checks them against a server database of successful answers to similar combinations of words and provides you with a probable answer. This is a system of speech recognition that sidesteps the pragmatics question discussed by Pinker by employing a huge vocabulary and a real-time, cloud-based feedback database. And Siri’s trademark cheekiness? Apple employs thousands of writers to input phrases and responses manually into the Siri cloud, continually building out its “vocabulary” while relying on statistics for the context.
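As a rough sketch of that final cross-check, assume the server-side step amounts to matching recognized words against hand-authored phrase/response pairs; the RESPONSES table and the word-overlap rule below are invented for illustration, not a description of Apple’s system.

# Toy sketch of the server-side cross-check: recognized words are
# matched against hand-written phrase/response pairs. The RESPONSES
# table and matching rule are invented for illustration.
RESPONSES = {
    ("what", "is", "the", "meaning", "of", "life"): "42. Just kidding.",
    ("will", "it", "rain", "today"): "Checking the forecast...",
}

def answer(words):
    """Pick the hand-written response whose trigger shares the most words."""
    query = set(words)
    best_trigger = max(RESPONSES, key=lambda trigger: len(query & set(trigger)))
    # No understanding happens here: it is overlap counting, nothing more.
    if query & set(best_trigger):
        return RESPONSES[best_trigger]
    return "Sorry, I didn't get that."  # fallback when nothing matches

print(answer(["will", "it", "rain", "today"]))  # -> "Checking the forecast..."
print(answer(["meaning", "of", "life"]))        # -> "42. Just kidding."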

Does this constitute true speech recognition, or is it just a more robust version of the old AOL chat bots? If this is the way speech recognition technology will evolve, do you think its database will ever grow large enough to be indistinguishable from true speech understanding, even if there’s no pragmatic “ghost in the machine,” as it were? Or will computers never be able to truly “learn” language?

Visit The Floating University to learn more about our approach to disrupting higher education, or check out Steven Pinker's eSeminar “Say What? Linguistics as a Window to Understanding the Brain.”

 
