Indictments reveal how Russia stirs up discord within U.S.
Russia has been trolling the United States for decades.
It bankrolled American authors who claimed Lee Harvey Oswald assassinated President John F. Kennedy under the direction of the FBI and CIA; it planted articles arguing Martin Luther King Jr. was not radical enough; and it spread a conspiracy theory that the U.S. manufactured the AIDS virus.
None of these disinformation campaigns succeeded in undermining American stability, in part because the Soviets didn’t have access to what may be the world’s most powerful weapon for fomenting fear, outrage and unverified information: social media.
The indictments last week by special counsel Robert S. Mueller III against 13 Russians and three Russian companies accused of interfering in the 2016 presidential election laid bare the way America’s biggest tech platforms have altered the centuries-old game of spycraft and political warfare.
Russian operatives couldn’t have asked for better tools than Facebook and Twitter to spark conflict and deepen divisions among Americans, experts say. Never before could they spread propaganda with such ease and speed, or target the people most vulnerable to misinformation with such precision.
“They’re using the same playbook; it’s just a new medium,” said Clint Watts, a former FBI agent and a senior fellow at the Center for Cyber and Homeland Security at George Washington University. “Social media is where you do this stuff now. It wasn’t possible during the Cold War.”
At the root of the strategy are the algorithms social networks employ to encourage more engagement — the comments, likes and shares that generate advertising revenue for their makers.
The problem, researchers say, is that people typically gravitate toward content that makes them angry online. Outrage produces a stronger response in the brain, increasing the odds that we react to news and posts that tick us off. The algorithms register this engagement and serve up more such content accordingly.
“Online platforms have profoundly changed the incentives of information sharing,” Yale psychologist M.J. Crockett wrote in a paper for Nature Human Behaviour. “Because they compete for our attention to generate advertising revenue, their algorithms promote content that is most likely to be shared, regardless of whether it benefits those who share it — or is even true.”
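The dynamic Crockett describes can be sketched in a few lines of code. This is a hypothetical illustration, not any platform's actual ranking system: the field names and weights are invented assumptions, chosen only to show how a feed optimized purely for engagement surfaces provocative content over accurate content.

```python
# Hypothetical sketch of engagement-weighted feed ranking: posts that
# provoke more reactions (comments, likes, shares) float to the top,
# regardless of whether they are accurate. The weights below are
# illustrative assumptions, not any real platform's algorithm.

def engagement_score(post):
    # Weight comments and shares more heavily than likes, since they
    # signal stronger (often outraged) reactions.
    return post["likes"] + 3 * post["comments"] + 5 * post["shares"]

def rank_feed(posts):
    # Sort by predicted engagement, descending; accuracy plays no role.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"id": "calm-news", "likes": 120, "comments": 4, "shares": 2},
    {"id": "outrage-bait", "likes": 40, "comments": 60, "shares": 30},
])
print([p["id"] for p in feed])  # the outrage-bait post ranks first
```

Note that nothing in the scoring function rewards truthfulness; the only signal is how strongly people react, which is exactly the incentive structure the researchers warn about.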
Because the platforms insist they aren’t media companies, they’re under no legal obligation to verify what’s posted. That allows falsehoods to spread faster, not least because most people don’t actually read the links they share, according to a 2016 study by researchers at Columbia University and the French National Institute.