A Quick and Dirty History of Artificial Intelligence
From Microsoft's racist chatbot to beating the world Go champion, artificial intelligence has better things to do than whatever we're afraid of. Here's a recap of the highlights.
On Wednesday, March 23, 2016, Microsoft unleashed its brand new AI on Twitter. Her name was Tay, and she was programmed to tweet like a teenage girl. Within 24 hours she tweeted like a Nazi:
Credit: Gerald Mallor/Twitter
Microsoft didn’t intend for that to happen, of course. It wanted to test and improve its algorithm for conversational language. According to Microsoft, Tay was built by “mining relevant [anonymous] public data” which was “modeled, cleaned, and filtered” to create her personality. The filtering went out the window when she went live, though, and you can see the results above. Thankfully, she didn’t actually develop a hatred for humanity; she just parroted what users gave her. Microsoft’s put Tay on ice for now, but their exercise in creating artificial intelligence is just one step in humanity’s quest to create the perfect AI.
The goal to create a system that converses, learns, and solves problems on its own hasn’t changed since the term was coined in 1956. The hope is to make that system so smart and powerful it can solve problems human intelligence can’t. The fear is that the system will become smart enough to recognize humanity as a liability and wipe us off the planet. Scientists have wrestled with those challenges for decades and come up with a few different ways of addressing them.
One of the first systems was an AI that played checkers, written by Arthur Samuel at IBM; it learned from experience and grew skilled enough to defeat a respected human checkers player. Other examples of early problem-solving systems include MIT-created programs that solved college calculus and algebra word problems and played chess.
MIT took the idea one step further in 1966 with ELIZA, the very first chatbot and ancestor to Tay. She could “talk” about any English-language subject by recognizing key phrases and replying with pre-programmed responses. She became popular on the ARPANET when users realized that she sounded like a stuffy therapist, and she set the model for every single chatbot since. If you’ve browsed a website and interacted with a customer service bot, rejected a weirdo trying to steal your bank account numbers on a message board, or played with a smart toy like Hello Barbie, you’ve interacted with ELIZA’s grandkids. Her original programming is still live, if you’d like to try it out.
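ELIZA’s core trick — spot a keyword, then echo it back inside a canned template — can be sketched in a few lines of Python. This is a toy illustration only (the rules below are invented for the example, not Weizenbaum’s original MAD-SLIP script):

```python
import re

# Toy ELIZA-style rules: a keyword pattern plus a response template.
# These three rules are illustrative inventions, not the original script.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when nothing matches

def respond(text: str) -> str:
    """Return the first matching canned response, Rogerian-therapist style."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Echo the user's own words back, minus trailing punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT
```

No learning happens anywhere in that loop — which is exactly why ELIZA never went full Tay: she could only reflect one sentence at a time, never absorb what users fed her.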
AI advancements in the 1970s and early 1980s focused on more theoretical and experimental applications, like a system’s ability to understand and process language. Government funding was limited due to concerns over the moral implications of self-learning software. Funding picked up in the late 1980s but was stymied again in the early 1990s. Most companies in the space relied on government funding, but IBM saw potential on the consumer side of the technology and continued to pour money into it. Now IBM is the biggest player in the AI world, with over 88,000 patents, more than any of its competitors.
IBM’s gamble paid off in 1997 with Deep Blue. Deep Blue was a chess-playing computer that beat world champion Garry Kasparov. By playing against the computer program Wchess and Grandmaster Joel Benjamin, Deep Blue honed its abilities to identify and predict patterns of play. That memory, combined with immense processing power (a 32-node IBM RS/6000® SP high-performance computer, according to IBM), enabled Deep Blue to evaluate 200 million positions per second. That made Deep Blue the fastest computer to ever face a chess champion and one of the most powerful supercomputers ever built. Naturally, Kasparov accused Deep Blue of cheating. IBM chose to retire it rather than play a rematch, and put its power and lessons to good use. They created the Deep Computing Institute with the goal of solving “large, complex technological problems through deep computing—using large-scale advanced computational methods applied to large data sets.” The processing abilities of Deep Blue help scientists observe and process large amounts of raw data in fields as diverse as financial modeling and molecular dynamics. Deep Blue also changed the way that chess programs think, making them much better at identifying patterns in play. If you’ve ever wanted to chuck your computer’s chess program out the window, you’ve got Deep Blue to thank.
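Those 200 million positions per second were spent on brute-force game-tree search. The standard idea behind it — minimax search with alpha-beta pruning, which chess programs before and after Deep Blue have used — can be sketched generically. This is a simplified illustration under that assumption; Deep Blue’s real evaluation ran on custom chess chips, and the toy tree and scores below are made up:

```python
from math import inf

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning over an abstract game tree.

    `children(node)` yields successor positions; `evaluate(node)` scores a
    leaf from the maximizing player's point of view.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = -inf
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # the opponent would never allow this line: prune
                break
        return value
    else:
        value = inf
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

# Hypothetical two-ply tree: each position maps to its successors,
# and leaf positions get static scores.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = alphabeta("root", 2, -inf, inf, True,
                 lambda n: TREE.get(n, []), lambda n: SCORES.get(n, 0))
# best is 3: branch "b" is pruned after b1 scores worse than branch "a"
```

The pruning is the whole point: whole subtrees get skipped the moment they’re provably worse than a line already found, which is how a machine can make hundreds of millions of evaluations per second count.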
The most popular of these systems today is Watson. Created by IBM, Watson is an AI that “uses natural language processing and machine learning to reveal insights from large amounts of unstructured data.” Like Microsoft’s Tay, Watson’s programmers used public data to shore up its memory banks with natural language. IBM was so proud of it that they put the AI on Jeopardy! and, also like Tay, it hit a snag. Watson’s programmers had populated its memory banks from the Urban Dictionary, and were horrified to hear it answering questions with swear words and f-bombs. They quickly learned from the experience and gave Watson a smart filter to halt its potty mouth. They also wiped the Urban Dictionary from its memory for good measure. The improved Watson technology is doing all kinds of amazing things, from helping diagnose patients to helping kids learn.
Technological advances are causing developments in artificial intelligence to happen more quickly than ever. From a system that defeated the world Go champion to one that wrote an almost-award-winning novel, artificial intelligence is here to stay. But if Tay and Watson are anything to go by, it’s nowhere close to wiping us off the face of the planet. And if you’re still not convinced, theoretical physicist Lawrence Krauss has explained why.