How Close is the Turing Test to Being Beaten?
Can the human mind be explained as a solely material thing? Can a machine ever be conscious?
Alan Turing was an English mathematician, computer scientist, philosopher and codebreaker, widely credited as the father of artificial intelligence (AI) and of modern computing. One hundred years after Turing's birth, we take stock of his most interesting idea: the Turing Test.
Alan Turing proposed that if a machine could reliably fool a human conversational partner into thinking it was human, that machine would have demonstrated real artificial intelligence. The original formulation of the test called for a human and a computer to each hold a conversation with a judge through a text-only medium. If the judge could not reliably distinguish the machine from the human, the machine would have met Turing's standard for machine intelligence.
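Turing's setup can be sketched in a few lines of code. The sketch below is a toy protocol, not any official formulation: the judge, the conversational parties and all class names are invented stubs for illustration, and a real trial would use human judges and free-form conversation.

```python
import random

class ScriptedParty:
    """Stub conversational partner that replies from a fixed script."""
    def __init__(self, replies):
        self.replies = list(replies)

    def reply(self, question):
        return self.replies.pop(0) if self.replies else "I don't know."

class SimpleJudge:
    """Stub judge: asks a canned question and guesses at random,
    i.e., it cannot tell the two parties apart."""
    def ask(self, channel, history):
        return "What is your favourite poem?"

    def guess_machine(self, transcript):
        return random.choice(["A", "B"])

def run_imitation_game(judge, human, machine, rounds=3):
    """Text-only imitation game: the judge questions two unlabeled
    channels and must guess which one hides the machine."""
    # Randomly assign the two parties to anonymous channels A and B.
    parties = {"A": human, "B": machine}
    if random.random() < 0.5:
        parties = {"A": machine, "B": human}

    transcript = {"A": [], "B": []}
    for _ in range(rounds):
        for channel, party in parties.items():
            question = judge.ask(channel, transcript[channel])
            transcript[channel].append((question, party.reply(question)))

    guess = judge.guess_machine(transcript)
    return parties[guess] is machine  # True iff the judge found the machine
```

Over many trials, a machine "passes" if judges identify it no better than chance, i.e. only about 50% of the time.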
Below we will see how far we have come since Turing's time.
1) Siri is a program that ships with every iPhone 4S. Apple purchased Siri in 2010; it began as a project funded by DARPA (the Defense Advanced Research Projects Agency), the Q Branch, so to speak, of the U.S. government.
Siri is, according to Apple, an "intelligent personal assistant" that can interpret natural-language commands to perform tasks with services like OpenTable, Google Maps and Wolfram Alpha. Siri's inclusion on the iPhone 4S was met with significant fanfare, but not all of Apple's followers are completely enthusiastic. Apple co-founder Steve Wozniak, for example, finds that Siri has lost the easy question-answering ability that attracted him to it before Apple's purchase, and that it is now more likely to feed its users irrelevant commercial information or advertising.
Siri was never intended to feel completely human, and her voice and speaking style reflect that ("I'll call you Your Lordship from now on. Ok?"). Nonetheless, it is hard not to refer to the program as "she," and praise has been strong and fairly consistent for the sense of personality and humor Siri exhibits, doubtless the result of extraordinary time and effort.
2) NELL, or Never-Ending Language Learning, is the heart of Carnegie Mellon University's research project Read the Web. NELL spends all day, every day reading natural language from the internet and extrapolating facts from it, while continually improving its ability to do so.
The opening text of NELL's site asks, "Can computers learn to read? We think so." Indeed, it is hard to imagine a definition of learning or reading that would not have to admit that NELL is in fact learning to read. The project deliberately apes the process humans are believed to use to get from "Mama!" to comparing thee to a summer's day.
In terms of this type of human-like learning, NELL is, perhaps, a particularly intellectually precocious toddler. She does know that "gorse_bark_beetle is an insect", that "virgil was born in Rome" and that "black jeans is a clothing item to go with shirt". However, NELL still extracts completely wrong facts, such as that "iran is a company that has an office in the country korea" and that "animals is an animal that preys on dogs".
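NELL's failure modes are easy to reproduce in miniature. The sketch below is a hypothetical, drastically simplified pattern-matcher (the real system uses coupled semi-supervised learning over many extractors, not a handful of regular expressions), but it shows how surface patterns yield both plausible facts and nonsense of the "animals preys on dogs" variety.

```python
import re

# Toy, hand-written extraction patterns (purely illustrative —
# NELL learns its patterns rather than hard-coding them).
PATTERNS = [
    (re.compile(r"(\w+) is an insect"), "is_insect"),
    (re.compile(r"(\w+) was born in (\w+)"), "born_in"),
    (re.compile(r"(\w+) preys on (\w+)"), "preys_on"),
]

def extract_facts(sentence):
    """Return (relation, arguments) tuples matched in one sentence.
    Like NELL, this happily extracts wrong 'facts' whenever the
    surface pattern matches but the semantics don't."""
    facts = []
    for pattern, relation in PATTERNS:
        for match in pattern.finditer(sentence.lower()):
            facts.append((relation, match.groups()))
    return facts

print(extract_facts("Virgil was born in Rome."))
# → [('born_in', ('virgil', 'rome'))]
print(extract_facts("Some animals preys on dogs."))
# → [('preys_on', ('animals', 'dogs'))]  — a bad belief, happily kept
```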
With a certain sense of the unintentional poetry of a child, NELL also often extracts candidate beliefs which are so mundane or unspecific as to make no real sense at all ("john is a musician who plays the electric guitar").
Nonetheless, reading the feed of NELL's recently learned facts, it is hard, at times, not to be chilled by the feeling of being studied by a real, non-human, even alien intelligence. Yes, NELL, toast can be served with coffee.
3) Cleverbot is a chat program which, by some metrics, has already beaten the Turing Test. Turing intentionally left the terms of his test vague, so the exact standards for passing are unclear, but Cleverbot scored an impressive rating of 59.3% human at the 2011 Techniche Festival in India (the human controls scored 63.3% human).
Unlike many other programs in the field, Cleverbot draws on its vast store of past conversations (over 65 million!), accumulated through its life as an internet application and lab test subject, to string together answers that humans have fed it before. This method demands a massive amount of computing power, and it arguably makes Cleverbot, in theory, a weaker candidate for being described as having real artificial intelligence.
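That retrieval idea can be illustrated with a toy sketch: find the stored human utterance most similar to the new input and reply with whatever a human once said next. The corpus, the similarity measure and the function names below are all invented for illustration; Cleverbot's actual algorithms and data are proprietary.

```python
from difflib import SequenceMatcher

# Tiny stand-in corpus of (past input, human reply) pairs —
# Cleverbot's real store holds tens of millions of exchanges.
CORPUS = [
    ("hello", "Hi there! How are you?"),
    ("are you a robot", "I am not a robot. I am a unicorn."),
    ("what is your favourite colour", "Blue, like the sky."),
]

def respond(user_input):
    """Reply with the stored human answer to the most similar past input."""
    def similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    past_input, past_reply = max(CORPUS,
                                 key=lambda pair: similarity(user_input, pair[0]))
    return past_reply

print(respond("Are you a robot?"))  # → I am not a robot. I am a unicorn.
```

Because every reply was originally written by a person, the output sounds human by construction, which is exactly why some argue this approach sidesteps rather than solves the problem of machine intelligence.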
Nonetheless, in practice, Cleverbot seems extremely human, and can joke and accuse and describe emotional responses to stimuli in language virtually indistinguishable from a person's. The greatest example of Cleverbot's quirky personality is demonstrated by this video. It shows two Cleverbots having a conversation that is at once awkward, mundane, philosophical, deep, emotional and, dare I say it, rather bitchy ("I am not a robot. I am a unicorn.")
4) Watson is the Jeopardy!-playing supercomputer that IBM created. Watson famously and publicly won a match of the venerable game show against two of its all-time champions, prompting Ken Jennings, the show's longest-running champion, to declare at the moment of defeat, "I, for one, welcome our new computer overlords!"
Watson sits in a line of computers, many of them also from IBM's labs, that have gone up against humans in competitions widely seen as showcases of human intelligence, and won. Another high-profile example is Deep Blue's 1997 defeat of world chess champion Garry Kasparov.
Watson was able to answer questions posed in natural language by drawing on over 4 terabytes of data, including the entirety of the information on Wikipedia. To bring it back to Turing, many people believe that Watson is capable of passing The Test under certain conditions. Futurist and inventor Ray Kurzweil, for one, thinks that Watson's technology could be modified to pass the Turing Test without very much trouble.
5) The Loebner Prize is a competition that stages a formal Turing Test once a year. The event draws teams of the world's top chatbot programmers, and uses linguistics experts, philosophers, literary buffs and the like as judges.
In 2008, a program called Elbot came one fooled judge away from passing the Turing Test by Turing's own standards. Nonetheless, the 2009 competition was not close at all, as you can read about in this brilliant piece in The Atlantic.
In some sense, Alan Turing was simply offering a reformulation of an old question when he asked whether a machine could ever truly think, could ever have a mind. That question has existed, in a sense, in the history of thought ever since people began asking questions about the material or immaterial nature of things.
The agreement that language and interaction are at the root of how we can know whether something is conscious is fairly strong. Regardless of the notions one enters the discussion with, speaking with and studying the programs at the forefront of AI technology (some of which are described above, some not) can be unnerving to anybody who fancies Homo sapiens to be one of a kind.
On the centenary of the birth of Alan Turing, codebreaker, thinker, father of computer science and AI, and philosopher of material consciousness, who was driven to suicide by his government's treatment of him as subhuman, we would do well to answer some questions.
What do you think? Can the human mind be explained as a solely material thing? Can a machine ever be conscious? Has one been yet?
From "if-by-whiskey" to the McNamara fallacy, being able to spot logical missteps is an invaluable skill.
- A fallacy is the use of invalid or faulty reasoning in an argument.
- There are two broad types of logical fallacies: formal and informal.
- A formal fallacy describes a flaw in the construction of a deductive argument, while an informal fallacy describes an error in reasoning.
Appeal to privacy<p>When someone behaves in a way that negatively affects (or could affect) others, but then gets upset when others criticize their behavior, they're likely engaging in the appeal to privacy — or "mind your own business" — fallacy. Examples:<br></p><ul><li>Someone who speeds excessively on the highway, considering his driving to be his own business.</li><li>Someone who doesn't see a reason to bathe or wear deodorant, but then boards a packed 10-hour flight.</li></ul><p>Language to watch out for: "You're not the boss of me." "Worry about yourself."</p>
Sunk cost fallacy<p>When someone argues for continuing a course of action despite evidence showing it's a mistake, it's often a sunk cost fallacy. The flawed logic here is something like: "We've already invested so much in this plan, we can't give up now." Examples:<br></p><ul><li>Someone who intentionally overeats at an all-you-can-eat buffet just to get their "money's worth"</li><li>A scientist who won't admit his theory is incorrect because it would be too painful or costly</li></ul><p>Language to watch out for: "We must stay the course." "I've already invested so much...." "We've always done it this way, so we'll keep doing it this way."</p>
If-by-whiskey<p>This fallacy is named after a speech given in 1952 by <a href="https://en.wikipedia.org/wiki/Noah_S._Sweat" target="_blank">Noah S. "Soggy" Sweat, Jr.</a>, a state representative for <a href="https://en.wikipedia.org/wiki/Mississippi" target="_blank">Mississippi</a>, on the subject of whether the state should legalize alcohol. Sweat's argument on prohibition was (to paraphrase):<br></p><p><em>If, by whiskey, you mean the devil's brew that causes so many problems in society, then I'm against it. But if whiskey means the oil of conversation, the philosopher's wine, "</em><em>the stimulating drink that puts the spring in the old gentleman's step on a frosty, crispy morning;" then I am certainly for it.</em></p>
Slippery slope<p>This fallacy involves arguing against a position because you think choosing it would start a chain reaction of bad things, even though there's little evidence to support your claim. Example:<br></p><ul><li>"We can't allow abortion because then society will lose its general respect for life, and it'll become harder to punish people for committing violent acts like murder."</li><li>"We can't legalize gay marriage. If we do, what's next? Allowing people to marry cats and dogs?" (Some people actually made this <a href="https://www.daytondailynews.com/news/national/cats-marrying-dogs-and-five-other-things-same-sex-marriage-won-mean/dLV9jKqkJOWUFZrSBETWkK/" target="_blank">argument</a> before same-sex marriage was legalized in the U.S.)</li></ul><p>Of course, sometimes decisions <em>do </em>start a chain reaction, which could be bad. The slippery slope device only becomes a fallacy when there's no evidence to suggest that chain reaction would actually occur.</p><p>Language to watch out for: "If we do that, then what's next?"</p>
"There is no alternative"<p><span style="background-color: initial;">A modification of the </span><a href="https://en.wikipedia.org/wiki/False_dilemma" target="_blank" style="background-color: initial;">false dilemma</a><span style="background-color: initial;">, this fallacy (often abbreviated to TINA) argues for a specific position because there are no realistic alternatives. Former British Prime Minister Margaret Thatcher used this exact line as a slogan to defend capitalism, and it's still used today to that same end: Sure, capitalism has its problems, but we've seen the horrors that occur when we try anything else, so there is no alternative.</span><br></p><p>Language to watch out for: "If I had a magic wand…" "What <em>else</em> are we going to do?!"</p>
Ad hoc arguments<p>An ad hoc argument isn't really a logical fallacy, but it is a fallacious rhetorical strategy that's common and often hard to spot. It occurs when someone's claim is threatened with counterevidence, so they come up with a rationale to dismiss the counterevidence, hoping to protect their original claim. Ad hoc claims aren't designed to be generalizable. Instead, they're typically invented in the moment. <a href="https://rationalwiki.org/wiki/Ad_hoc" target="_blank">RationalWiki</a> provides an example:<br></p><p style="margin-left: 20px;">Alice: "It is clearly said in the Bible that the Ark was 450 feet long, 75 feet wide and 45 feet high."</p><p style="margin-left: 20px;">Bob: "A purely wooden vessel of that size could not be constructed; the largest real wooden vessels were Chinese treasure ships which required iron hoops to build their keels. Even the <em>Wyoming</em> which was built in 1909 and had iron braces had problems with her hull flexing and opening up and needed constant mechanical pumping to stop her hold flooding."</p><p style="margin-left: 20px;">Alice: "It's possible that God intervened and allowed the Ark to float, and since we don't know what gopher wood is, it is possible that it is a much stronger form of wood than any that comes from a modern tree."</p>
Snow job<p><span style="background-color: initial;">This fallacy occurs when someone doesn't really have a strong argument, so they just throw a bunch of irrelevant facts, numbers, anecdotes and other information at the audience to confuse the issue, making it harder to refute the original claim. Example:</span><br></p><ul><li>A tobacco company spokesperson who is confronted about the health risks of smoking, but then proceeds to show graph after graph depicting many of the other ways people develop cancer, and how cancer metastasizes in the body, etc.</li></ul><p>Watch out for long-winded, data-heavy arguments that seem confusing by design.</p>
McNamara fallacy<p>Named after <a href="https://en.wikipedia.org/wiki/Robert_McNamara" target="_blank">Robert McNamara</a>, the <a href="https://en.wikipedia.org/wiki/United_States_Secretary_of_Defense" target="_blank">U.S. secretary of defense</a> from 1961 to 1968, this fallacy occurs when decisions are made based solely on <em>quantitative metrics or observations,</em> ignoring other factors. It stems from the Vietnam War, in which McNamara sought to develop a formula to measure progress in the war. He decided on body count. But this "objective" formula didn't account for other important factors, such as the possibility that the Vietnamese people would never surrender.<br></p><p>You could also imagine this fallacy playing out in a medical situation. Imagine a terminal cancer patient has a tumor, and a certain procedure helps to reduce the size of the tumor, but also causes a lot of pain. Ignoring quality of life would be an example of the McNamara fallacy.</p><p>Language to watch out for: "You can't measure that, so it's not important."</p>
A new study looks at what would happen to human language on a long journey to other star systems.
- A new study proposes that language could change dramatically on long space voyages.
- Spacefaring people might lose the ability to understand the people of Earth.
- This scenario is of particular concern for potential "generation ships".
Generation Ships (video: https://www.youtube.com/embed/H2f0Wd3zNj0)