Fake news proves difficult to eradicate


Nearly a year after Facebook and Google launched offensives against fake news, they're still inadvertently promoting it – often at the worst possible times.

Online services designed to engross users aren't so easily retooled to promote greater accuracy, it turns out – especially with online trolls, pranksters and more malicious types scheming to evade new controls as they're rolled out.

In the immediate aftermath of the Las Vegas shooting, Facebook's "Crisis Response" page for the attack featured a false article misidentifying the gunman and claiming he was a "far left loon". Google promoted a similarly erroneous item from the anonymous prankster site 4chan in its "Top Stories" results.

A day after the attack, a YouTube search on "Las Vegas shooting" returned, as its fifth result, a conspiracy-theory video claiming multiple shooters were involved in the attack. YouTube is owned by Google.

None of these stories was true. Police identified the sole shooter as Stephen Paddock, a Nevada man whose motive remains a mystery. The attack on a music festival left 58 dead and hundreds wounded.

The companies quickly purged offending links and tweaked their algorithms to favour more authoritative sources. But their work is clearly incomplete – a different Las Vegas conspiracy video was the eighth result displayed by YouTube.

Engagement first

Why do these highly automated services keep failing to separate truth from fiction? One big factor: most online services' systems tend to emphasise posts that engage an audience – exactly what a lot of fake news is specifically designed to do.

That problem is much bigger in the wake of a disaster, when facts are still unclear and demand for information runs high.

Malicious actors have learned to take advantage of this, says Mandy Jenkins, head of news at social media and news research agency Storyful. "They know how the sites work, they know how algorithms work, they know how the media works," she says.

Getting algorithms right

Breaking news is also inherently challenging for automated filter systems. Google says the 4chan post that misidentified the Las Vegas shooter should not have appeared in its "Top Stories" feature, and it was replaced within hours.

Outside experts say Google was flummoxed by two different issues.

First, its "Top Stories" feature is designed to return results from the broader web alongside items from news outlets. Second, signals that help Google's system evaluate the credibility of a web page – for instance, links from known authoritative sources – aren't available in breaking-news situations, says independent search optimisation consultant Matthew Brown.

"If you have enough citations or references to something, algorithmically that's going to look very important to Google," Brown says. "The problem is an easy one to define but a tough one to resolve."

More people, fewer robots

United States law currently exempts Facebook, Google and similar companies from liability for material published by their users. But circumstances are forcing the tech companies to accept more responsibility for the information they spread.

Facebook said last week that it would hire an extra 1000 people to help vet ads after it found a Russian agency bought ads meant to influence last year's election. It's also subjecting potentially sensitive ads, including political messages, to "human review". – AP


