White Noise Computer Virus Could Bring Down Artificial Intelligence
Nightmare scenarios about Artificial Intelligence typically feature computers that become too smart for their own good and turn against their creators. In 2001: A Space Odyssey, HAL 9000 famously refused to open the pod bay doors for Dave.
Well, now we have an entirely different reason to be wary of AI, and the culprit is human rather than machine. Dave Gershgorn reports in Popular Science that a risk more imminent and more worrisome than HAL comes with everyday devices using even rather basic versions of AI: Siri (the virtual assistant on Apple iPhones), Alexa (Amazon’s assistant) and Google Now (for Android phones). The problem isn’t that the computer programs are too smart; it’s that they’re too gullible. Like the prisoners chained in Plato’s cave who mistake manufactured shadows on the wall for reality, Siri and her virtual posse tend to believe what they hear. And that’s a problem if someone with a sinister motive decides to use your phone against you.
Gershgorn quotes a researcher who says his team has “been able to activate open source audio recognizers, Siri, and Google Now…with accuracy on all three more than 90 percent.” It doesn’t take a menacing recording to break into the phones; instead, “a science-fiction alien transmission…a garbled mix of white noise and human voice” is enough to gain access, and the sound is “certainly unrecognizable as a command.” In other words, you’d have no idea you were being attacked.
How might this work? Gershgorn explains:
With this attack, any phone that hears the noise (they have to specifically target iOS or Android) could be unknowingly forced to visit a webpage that plays the noise, and thus infect other phones near it… In that same scenario, the webpage could also silently download malware onto the device. There’s also the possibility these noises could be played over the radio, hidden in white noise or background audio.
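How can a command hide in apparent noise at all? The attack exploits the gap between what a speech recognizer listens for and what a human ear needs. A recognizer typically reduces audio to a compact set of acoustic features (such as mel-frequency cepstral coefficients, or MFCCs) before decoding it. The sketch below is a conceptual illustration of that idea rather than the researchers’ actual pipeline: it garbles a spoken command by keeping only those features and reconstructing audio from them. The filename and parameters are assumptions for the example.

```python
# Conceptual sketch of audio-command obfuscation, not the researchers'
# exact method. Assumes librosa and soundfile are installed, and that
# "command.wav" (a hypothetical file) holds a clearly spoken command.
import librosa
import soundfile as sf

# Load the spoken command.
y, sr = librosa.load("command.wav", sr=16000)

# Keep only the compact features a speech recognizer relies on (MFCCs),
# discarding most of the detail that makes the audio intelligible to humans.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Reconstruct audio from the features alone. The result sounds like
# garbled static to a person, but a recognizer keyed to the same
# features may still decode the underlying command.
y_garbled = librosa.feature.inverse.mfcc_to_audio(mfcc, sr=sr)

sf.write("garbled_command.wav", y_garbled, sr)
```

The reconstructed clip preserves much of what a feature-based recognizer keys on while shedding what a listener needs, and that asymmetry is what lets a command pass unnoticed.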
Researchers at Google are working on defenses against this new front in cyber-warfare, but they are not confident they’ll be able to claim victory anytime soon. Gershgorn again:
Kantchelian says that he doesn’t think the door is completely closed for any of these attacks, even with the promising research from the Google team.
“At least in computer security, unfortunately the attackers are always ahead of us,” Kantchelian says. “So it’s going to be a little dangerous to say we solved all the problems of adversarial machine learning by retraining.”
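The retraining Kantchelian mentions means feeding a model its own worst cases: generate inputs deliberately perturbed to fool it, label them correctly, and train again. The toy sketch below illustrates the idea on a two-feature classifier using the fast gradient sign method; the data, model, and numbers are all illustrative assumptions, not anything from the Google team’s work.

```python
# Minimal sketch of adversarial retraining on a toy classifier.
# Everything here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters standing in for "command" vs. "noise".
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def train(X, y, epochs=500, lr=0.1):
    """Logistic regression fit by plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def fgsm(X, y, w, b, eps=0.5):
    """Fast gradient sign method: nudge each input in the direction
    that most increases the model's loss."""
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad_x = np.outer(p - y, w)          # d(loss)/d(input), row-wise
    return X + eps * np.sign(grad_x)

w, b = train(X, y)
X_adv = fgsm(X, y, w, b)

# Retrain on the original data plus the adversarial examples, labeled
# correctly, so the model learns to resist this perturbation.
w2, b2 = train(np.vstack([X, X_adv]), np.concatenate([y, y]))

for name, (wi, bi) in [("before", (w, b)), ("after", (w2, b2))]:
    acc = (((X_adv @ wi + bi) > 0).astype(int) == y).mean()
    print(f"accuracy on adversarial inputs {name} retraining: {acc:.2f}")
```

The catch, as Kantchelian notes, is that this only hardens the model against the perturbations it was retrained on; an attacker can simply search for new ones, which is why retraining alone doesn’t close the door.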
The Achilles’ heel of these machines is their inability to distinguish trustworthy sources of knowledge and information from deceitful, manipulative ones. This is another way of saying that computers are more similar to their human creators than anyone may have thought. That appears to be a mixed blessing.
—
Steven V. Mazie is Professor of Political Studies at Bard High School Early College-Manhattan and Supreme Court correspondent for The Economist. He holds an A.B. in Government from Harvard College and a Ph.D. in Political Science from the University of Michigan. He is the author, most recently, of American Justice 2015: The Dramatic Tenth Term of the Roberts Court.
Follow Steven Mazie on Twitter: @stevenmazie