Russia-linked bots use attack to divide public
SAN FRANCISCO — One hour after news broke about the school shooting in Florida last week, Twitter accounts suspected of having links to Russia released hundreds of posts taking up the gun-control debate.
The accounts addressed the news with the speed of a cable news network. Some adopted the hashtag #guncontrolnow. Others used #gunreformnow and #Parklandshooting. Earlier on Wednesday, before the mass shooting at Marjory Stoneman Douglas High School in Parkland, Florida, many of those accounts had been focused on the investigation by special counsel Robert Mueller into Russian meddling in the 2016 presidential election.
“This is pretty typical for them, to hop on breaking news like this,” said Jonathon Morgan, chief executive of New Knowledge, a company that tracks online disinformation campaigns. “The bots focus on anything that is divisive for Americans. Almost systematically.”
One of the most divisive issues in the nation is how to handle guns, pitting Second Amendment absolutists against proponents of gun control. And the messages from these automated accounts, or bots, were designed to widen the divide and make compromise even more difficult.
Any news event — no matter how tragic — has become fodder to spread inflammatory messages in what is believed to be a far-reaching Russian disinformation campaign. The disinformation comes in various forms: conspiracy videos on YouTube, fake interest groups on Facebook, and armies of bot accounts that can hijack a discussion on Twitter.
Those automated Twitter accounts have been closely tracked by researchers. Last year, the Alliance for Securing Democracy, in conjunction with the German Marshall Fund, a public-policy research group in Washington, created a website that tracks hundreds of Twitter accounts of human users and suspected bots that they have linked to a Russian influence campaign.
The researchers zeroed in on Twitter accounts posting information that was in step with material coming from well-known Russian propaganda outlets. To spot an automated bot, they looked for certain signs, such as an extremely high volume of posts or content that conspicuously matched that on hundreds of other accounts.
The bots are “going to find any contentious issue, and instead of making it an opportunity for compromise and negotiation, they turn it into an unsolvable issue bubbling with frustration,” said Karen North, a social media professor at the University of Southern California’s Annenberg School for Communication and Journalism. “It just heightens that frustration and anger.”
Researchers said they watched as the bots began posting about the Parkland shooting shortly after it happened.
When the Russian bots jumped on the hashtag #Parklandshooting — initially created to spread news of the shooting — they quickly stoked tensions. Exploiting the issue of mental illness in the gun control debate, they propagated the notion that Nikolas Cruz, the suspected gunman, was a mentally ill “lone killer.” They also claimed that he had searched for Arabic phrases on Google before the shooting.