
Russian bots tried to hijack the gun debate. Did it work?

The disinformation campaigns made famous in 2016 continue.

“Dezinformatsiya.” 


This is Russian for “disinformation”: the deliberate spread of false information in order to deceive. It worked like a charm during the 2016 election, cleaving the political divide in the United States even more sharply. (It was also used in the Brexit vote a few months before that.)

It’s back (never really left, actually), and it’s getting stronger. 

MOSCOW – JANUARY 7: A Russian police officer, braving the bitter cold, patrols Red Square with the Kremlin in the background January 7, 2003 in Moscow. Temperatures fell to 27 degrees below zero Celsius (17 below zero Fahrenheit) in Moscow. (Photo by Oleg Nikishin/Getty Images)

During the immediate coverage of the Parkland, Florida, high school shooting this week, Russian bots swarmed Twitter in an attempt to sow discord and spread false information about gun rights, gun control, and even what actually happened in Parkland.

As the #ParklandShooting hashtag began to take off, the bots attempted to boost far-right extremist messaging and responses from the NRA, creating even more division and causing massive Twitter wars as the story unfolded. 

According to Hamilton 68, a site created by the Alliance for Securing Democracy, Tweets and hashtags from accounts linked to Russian influence campaigns increased dramatically as details emerged, including keywords such as “Parkland,” “guncontrolnow,” and “Nikolas Cruz,” the name of the shooter.

According to Botcheck.Me, another site that tracks such bots, built by RoBhat Labs, the top two-word phrases used immediately after the shooting were things like “school shooting,” “gun control,” and “Florida school.”


Image: Alliance for Securing Democracy/Getty

What happens, basically, is that these bots, ultimately controlled by humans, generate Tweets about a specific event using those keywords, and then, conveniently for them, actual humans pick the Tweets up, Retweet them, reply to them, and so on.

Sometimes those very same bots also jump on an existing hashtag or phrase and amplify it with their own messaging, in an attempt to take control of the conversation.
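To see why this tactic works so well, consider a toy simulation: a hypothetical sketch, not real Twitter data, in which every account count and posting rate is an invented assumption chosen only to illustrate the dynamic.

```python
import random

# Toy model of the amplification loop described above. All numbers are
# hypothetical, chosen to illustrate the dynamic, not to match real data.
ORGANIC_USERS = 10_000      # ordinary accounts that might tweet the hashtag
BOT_ACCOUNTS = 50           # coordinated automated accounts
ORGANIC_POST_PROB = 0.001   # chance an organic user posts the tag each minute
BOT_POSTS_PER_MINUTE = 5    # tweets each bot pushes every minute

random.seed(42)
organic_total = 0
bot_total = 0
for minute in range(60):  # simulate one hour of a breaking-news cycle
    organic_total += sum(
        1 for _ in range(ORGANIC_USERS) if random.random() < ORGANIC_POST_PROB
    )
    bot_total += BOT_ACCOUNTS * BOT_POSTS_PER_MINUTE

share = bot_total / (bot_total + organic_total)
print(f"organic tweets: {organic_total:,}")
print(f"bot tweets:     {bot_total:,}")
print(f"bots are ~0.5% of accounts but {share:.0%} of hashtag volume")
```

Under these made-up numbers, the bots are a rounding error among accounts yet supply almost all of the hashtag’s traffic, which is enough to push it into trending lists and in front of the real humans who do the Retweeting.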

In one actual example from this case, the bots grabbed images from the shooter’s Instagram account showing him holding knives and guns and wearing camouflage, paired with screenshots of the words “Allahu Akbar,” in an attempt to cast the shooter as a lone wolf or possibly a Muslim terrorist, and to push the narrative that nothing can be done to stop people like him.

In addition, Twitter bots generated traction for the idea that the shooter belonged to the radical-left group Antifa (short for “anti-fascist”), when in fact the Associated Press reported that he was a member of a local white nationalist group.

The primary reason these Russian bots are doing this? Likely to sow discord and division, which, over time, destabilizes everything good about the United States. Just look at where we are in history, and you can see that it’s working. At times, the Russian bots gin up hate and vitriol on both sides of the debate, and in at least one case they even encouraged battles in the streets.


Will it also work during our midterm elections, just eight months away?

Unless Twitter takes action and the public becomes better educated about this kind of manipulation, it’s inevitable.

Time will tell. 

