USA TODAY International Edition

Social networks on alert as vote nears

Crackdown aims to keep misinformation off sites

By Cat Hofacker

As voters prepare to cast their ballots in the midterm elections, social media companies have an additional concern: protecting that process.

After reports of fake accounts and false news stories infiltrating social networks during the 2016 presidential election, companies like Facebook and Twitter have doubled down on efforts to prevent election manipulation.

At stake is not only the validity of information found on their platforms but also the trust of their users.

Business Insider Intelligence’s Digital Trust Report 2018, released in August, reported that more than 75 percent of respondents said Facebook was “extremely likely” or “very likely” to show them deceptive content like “fake news, scams or click bait.” Twitter didn’t do much better, with more than 60 percent of respondents agreeing the platform has deceptive content.

In January 2017, reports emerged that foreign entities like the Russia-based Internet Research Agency had used social media platforms to spread false and divisive information throughout the 2016 campaign. By September 2017, Facebook announced it had linked more than 3,000 political ads run on its platform between 2015 and 2017 to Russia. Facebook later said more than 10 million users had been exposed to the ads.

In September, Facebook and Twitter executives testified before Congress about accusations that foreign operatives’ use of their platforms could have affected the presidential election.

Spokesmen for both Facebook and Twitter said that in the aftermath of the 2016 election, the companies have ramped up efforts to identify and remove fake accounts and protect users from false information.

Yoel Roth, Twitter’s head of site integrity, said the company has cracked down on “coordinated platform manipulation,” or people and organizations using Twitter to mislead other users and spread false information.

During the 2016 campaign, misinformation appeared online in forms including fake accounts and online publications that spread hyperpartisan views. Leading up to the November midterms, experts say the techniques are similar, but the people spreading misinformation have gotten smarter. The social networks have, too.

“We haven’t seen a fundamental shift in what (the bad actors) are doing. But in 2016 it was like breaking into a house with the door wide open, and now there’s at least a dog inside that’s going to bark,” said Bret Schafer, a social media analyst at the Alliance for Securing Democracy, a bipartisan national security advocacy group.

Schafer said social networks’ efforts to protect their platforms and users have created a “layer of friction” that makes it more challenging to carry out misinformation campaigns. Efforts include cracking down on “bad actors” who use fake accounts to spread misinformation and requiring political advertisers to verify their identity by providing a legitimate mailing address.

Facebook has developed a multifaceted approach to election integrity. The company has nearly doubled its security team ahead of the 2018 midterms and is taking a more proactive role in identifying “coordinated inauthentic behavior,” according to spokeswoman Brandi Hoffine Barr.

“We now have more than 20,000 people working on safety and security; we have put in place advanced systems and tools to detect and stop threats and developed backstops ... to help address any unanticipated threats as quickly as possible,” Hoffine Barr said.

Many of the company’s efforts begin with detecting and removing fake accounts. In May, Facebook said it had disabled nearly 1.3 billion fake accounts over the preceding six months. Because those accounts are often the source of false information on the site, Facebook said removing them helps curb the spread of false news.

Facebook also announced in October that it had removed 559 pages and 251 accounts for breaking the platform’s rules against “spam and coordinated inauthentic behavior,” which includes creating large networks of accounts to mislead other users. On Facebook, that can look like people or organizations creating false pages or fake accounts.

Hoffine Barr described Facebook’s work as “continuous effort” and said the company isn’t working in isolation.

“Ahead of the upcoming midterm elections, we are working closely with federal and state elections officials, as well as other technology companies, to coordinate our efforts and share information,” she said.

Two weeks before the midterms, Facebook uncovered a disinformation campaign from Iran that attempted to sow discord over hot-button issues.

Twitter has also taken action against bad actors, recently purging accounts the company had previously locked for “suspicious changes in behavior.” In an Oct. 1 blog post, Twitter executives detailed three “critical” areas of its efforts to preserve election integrity.

The first, an update to Twitter’s rules, includes expanding what Twitter considers a fake account. The company currently uses a number of criteria to make that determinat­ion, including whether the profile uses stolen or copied photos and provides intentiona­lly misleading profile informatio­n. The second category is described as “detection and protection” and entails identifyin­g spam accounts, as well as improving Twitter’s ability to ban users who violate policies.

The most visible efforts fall under “product developments.” From giving users control over the order of their timelines to adding election labels for candidates’ accounts, this category is all about helping users stay informed.


GETTY IMAGES Silicon Valley companies are on guard against digital disinformation.
