Social networks on alert as vote nears
Crackdown aims to keep misinformation off sites
As voters prepare to cast their ballots in the midterm elections, social media companies have an additional concern: protecting that process.
After reports of fake accounts and false news stories infiltrating social networks during the 2016 presidential election, companies like Facebook and Twitter have doubled down on efforts to prevent election manipulation.
At stake is not only the validity of information found on their platforms but also the trust of their users.
Business Insider Intelligence’s Digital Trust Report 2018, released in August, reported that more than 75 percent of respondents said Facebook was “extremely likely” or “very likely” to show them deceptive content like “fake news, scams or click bait.” Twitter didn’t do much better, with more than 60 percent of respondents agreeing the platform has deceptive content.
In January 2017, reports emerged that foreign entities like the Russia-based Internet Research Agency used social media platforms to spread false and divisive information throughout the 2016 campaign. By September 2017, Facebook announced it had linked more than 3,000 political ads run on its platform between 2015 and 2017 to Russia. Facebook later said over 10 million users had been exposed to the ads.
In September, Facebook and Twitter executives testified before Congress about accusations that foreign operatives’ use of their platforms could have affected the presidential election.
Spokespeople for both Facebook and Twitter said that in the aftermath of the 2016 election, the companies have ramped up efforts to identify and remove fake accounts and protect users from false information.
Yoel Roth, Twitter’s head of site integrity, said the company has cracked down on “coordinated platform manipulation,” or people and organizations using Twitter to mislead other users and spread false information.
During the 2016 campaign, misinformation appeared online in the form of fake accounts and online publications that spread hyperpartisan views, among other tactics. Leading up to the November midterms, experts say the techniques are similar, but the people spreading misinformation have gotten smarter. The social networks have, too.
“We haven’t seen a fundamental shift in what (the bad actors) are doing. But in 2016 it was like breaking into a house with the door wide open, and now there’s at least a dog inside that’s going to bark,” said Bret Schafer, a social media analyst at the Alliance for Securing Democracy, a bipartisan national security advocacy group.
Schafer said social networks’ efforts to protect their platforms and users have created a “layer of friction” that makes it more challenging to carry out misinformation campaigns. Efforts include cracking down on “bad actors” who use fake accounts to spread misinformation and requiring political advertisers to verify their identity by providing a legitimate mailing address.
Facebook has developed a multifaceted approach to elections integrity. The company has nearly doubled its security team ahead of the 2018 midterms and is taking a more proactive role in identifying “coordinated inauthentic behavior,” according to spokeswoman Brandi Hoffine Barr.
“We now have more than 20,000 people working on safety and security; we have put in place advanced systems and tools to detect and stop threats and developed backstops ... to help address any unanticipated threats as quickly as possible,” Hoffine Barr said.
Many of the company’s efforts begin with detecting and removing fake accounts. In May, Facebook said it had disabled nearly 1.3 billion fake accounts over the preceding six months. Because those accounts are often the source of false information on the site, Facebook said removing them helps curb the spread of false news.
Facebook also announced in October that it had removed 559 pages and 251 accounts for breaking the platform’s rules for “spam and coordinated inauthentic behavior,” which includes creating large networks of accounts to mislead other users. On Facebook, that can look like people or organizations creating false pages or fake accounts.
Hoffine Barr described Facebook’s work as a “continuous effort” and said the company isn’t working in isolation.
“Ahead of the upcoming midterm elections, we are working closely with federal and state elections officials, as well as other technology companies, to coordinate our efforts and share information,” she said.
Two weeks before the midterms, Facebook uncovered a disinformation campaign from Iran that attempted to sow discord over hot-button issues.
Twitter has also taken action against bad actors, recently purging accounts the company had previously locked for “suspicious changes in behavior.” In an Oct. 1 blog post, Twitter executives detailed three “critical” areas of its efforts to preserve election integrity.
The first, an update to Twitter’s rules, includes expanding what Twitter considers a fake account. The company currently uses a number of criteria to make that determination, including whether the profile uses stolen or copied photos and provides intentionally misleading profile information. The second category is described as “detection and protection” and entails identifying spam accounts, as well as improving Twitter’s ability to ban users who violate policies.
The most visible efforts fall under “product developments.” From giving users control over the order of their timelines to adding an elections label for candidates’ accounts, this category is all about helping users stay informed.