Social networks on alert as vote nears

Crackdown aims to keep misinformation off sites

USA TODAY Weekend Extra - NEWS - Cat Hofacker

As voters prepare to cast their ballots in the midterm elections, social media companies have an additional concern: protecting that process.

After reports of fake accounts and false news stories infiltrating social networks during the 2016 presidential election, companies like Facebook and Twitter have doubled down on efforts to prevent election manipulation.

At stake is not only the validity of information found on their platforms but also the trust of their users.

Business Insider Intelligence’s Digital Trust Report 2018, released in August, reported that more than 75 percent of respondents said Facebook was “extremely likely” or “very likely” to show them deceptive content like “fake news, scams or click bait.” Twitter didn’t do much better, with more than 60 percent of respondents agreeing the platform has deceptive content.

In January 2017, reports emerged that foreign entities like the Russia-based Internet Research Agency used social media platforms to spread false and divisive information throughout the 2016 campaign. By September 2017, Facebook announced it had linked more than 3,000 political ads run on its platform between 2015 and 2017 to Russia. Facebook later said over 10 million users had been exposed to the ads.

In September 2018, Facebook and Twitter executives testified before Congress about accusations that foreign operatives’ use of their platforms could have affected the presidential election.

Spokespeople for both Facebook and Twitter said that in the aftermath of the 2016 election, the companies ramped up efforts to identify and remove fake accounts and protect users from false information.

Yoel Roth, Twitter’s head of site integrity, said the company has cracked down on “coordinated platform manipulation,” or people and organizations using Twitter to mislead other users and spread false information.

During the 2016 campaign, misinformation appeared online in the form of fake accounts and online publications that spread hyperpartisan views, among other tactics. Leading up to the November midterms, experts say the techniques are similar, but the people spreading misinformation have gotten smarter. The social networks have, too.

“We haven’t seen a fundamental shift in what (the bad actors) are doing. But in 2016 it was like breaking into a house with the door wide open, and now there’s at least a dog inside that’s going to bark,” said Bret Schafer, a social media analyst at the Alliance for Securing Democracy, a bipartisan national security advocacy group.

Schafer said social networks’ efforts to protect their platforms and users have created a “layer of friction” that makes it more challenging to carry out misinformation campaigns. Those efforts include cracking down on “bad actors” who use fake accounts to spread misinformation and requiring political advertisers to verify their identity by providing a legitimate mailing address.

Facebook has developed a multifaceted approach to elections integrity. The company has nearly doubled its security team ahead of the 2018 midterms and is taking a more proactive role in identifying “coordinated inauthentic behavior,” according to spokeswoman Brandi Hoffine Barr.

“We now have more than 20,000 people working on safety and security; we have put in place advanced systems and tools to detect and stop threats and developed backstops ... to help address any unanticipated threats as quickly as possible,” Hoffine Barr said.

Many of the company’s efforts begin with detecting and removing fake accounts. In May, Facebook said it had disabled nearly 1.3 billion fake accounts over the previous six months. Because those accounts are often the source of false information on the site, Facebook said removing them helps curb the spread of false news.

Facebook also announced in October that it had removed 559 pages and 251 accounts for breaking the platform’s rules against “spam and coordinated inauthentic behavior,” which includes creating large networks of accounts to mislead other users. On Facebook, that can look like people or organizations creating false pages or fake accounts.

Hoffine Barr described Facebook’s work as a “continuous effort” and said the company isn’t working in isolation.

“Ahead of the upcoming midterm elections, we are working closely with federal and state elections officials, as well as other technology companies, to coordinate our efforts and share information,” she said.

Two weeks before the midterms, Facebook uncovered a disinformation campaign from Iran that attempted to sow discord over hot-button issues.

Twitter has also taken action against bad actors, recently purging accounts the company had previously locked for “suspicious changes in behavior.” In an Oct. 1 blog post, Twitter executives detailed three “critical” areas of its efforts to preserve election integrity.

The first, an update to Twitter’s rules, includes expanding what Twitter considers a fake account. The company currently uses a number of criteria to make that determination, including whether the profile uses stolen or copied photos and provides intentionally misleading profile information. The second category is described as “detection and protection” and entails identifying spam accounts, as well as improving Twitter’s ability to ban users who violate policies.

The most visible efforts fall under “product developments.” From giving users control over the order of their timelines to adding an elections label for candidates’ accounts, this category is all about helping users stay informed.

Silicon Valley companies are on guard against digital disinformation. GETTY IMAGES
