Misinformation after the Las Vegas massacre

The Week (US) - News

In the hours after a tragedy, “accuracy matters,” said the Los Angeles Times. “Facts can help catch the suspects, save lives, and prevent a panic.” But in the aftermath of the Las Vegas massacre, “the world’s two biggest gateways for information,” Google and Facebook, repeatedly spread lies about the shooting, steering users toward fake news and conspiracy-laden fringe sites. Google’s Top Stories box linked to a discussion on 4chan, a notoriously noxious online message board frequented by internet trolls, identifying the wrong assailant and falsely claiming he was an anti-Trump liberal. Facebook “perpetuated the same rumors,” linking to a site called “AltRight News” on its official Crisis Response page and promoting a story that the shooter had been linked to ISIS. Google-owned YouTube promoted conspiracy videos suggesting the massacre was a staged “false-flag” operation, said Sam Levin in The Guardian. Even after family members of those killed complained, YouTube argued that the videos “did not violate its standards.”

The platforms’ broadcasting of lies about Las Vegas is “no one-off incident,” said Kevin Roose in The New York Times. Over the past few years, extremists and conspiracy theorists have repeatedly “swarmed major news events, using search-optimized ‘keyword bombs’ and algorithm-friendly headlines” to manipulate the results that users of Google and Facebook see. Even when these fake-news campaigns are spotted and stopped, “they often last hours or days, long enough to spread misleading information to millions of people.” And the tech companies themselves often hesitate to step in, having “largely abrogated the responsibility of moderating” their content in favor of algorithms that pick and place headlines and posts. This “automation of editorial judgment,” combined with the companies’ unwillingness to draw a distinction between, say, CNN and fringe news sites, “has created a lopsided battle between those who want to spread misinformation and those tasked with policing it.”

“Blaming the algorithm has gotten pretty common,” said William Turton in TheOutline.com. Tech companies want to pretend that their code for scraping the web behaves as an “autonomous force.” That’s absurd. These algorithms “are doing what they were designed to do. The problem is that they are not designed to exclude misinformation or account for bias.” It’s not enough for Google and Facebook to say that they will tinker with their code to avoid this mess in the future, said Alexis Madrigal in TheAtlantic.com. They have “to take responsibility for their active role in damaging the quality of information reaching the public.” That means hiring more human moderators who will instantly know that 4chan simply doesn’t qualify as a reliable source of facts. “There’s no hiding behind algorithms anymore.”

Separating truth from lies
