Fake news proves difficult to eradicate
Nearly a year after Facebook and Google launched offensives against fake news, they’re still inadvertently promoting it – often at the worst possible times.
Online services designed to engross users aren't so easily retooled to promote greater accuracy, it turns out, especially with online trolls, pranksters and more malicious types scheming to evade new controls as they're rolled out.
In the immediate aftermath of the Las Vegas shooting, Facebook's "Crisis Response" page for the attack featured a false article misidentifying the gunman and claiming he was a "far left loon". Google promoted a similarly erroneous item from the anonymous prankster site 4chan in its "Top Stories" results.
A day after the attack, a YouTube search for "Las Vegas shooting" returned, as its fifth result, a conspiracy-theory video claiming that multiple shooters were involved in the attack. YouTube is owned by Google.
None of these stories was true. Police identified the sole shooter as Stephen Paddock, a Nevada man whose motive remains a mystery. The attack on a music festival left 58 dead and hundreds wounded.
The companies quickly purged offending links and tweaked their algorithms to favour more authoritative sources. But their work is clearly incomplete – a different Las Vegas conspiracy video was the eighth result displayed by YouTube.
Engagement first
Why do these highly automated services keep failing to separate truth from fiction? One big factor: most online services' ranking systems tend to emphasise posts that engage an audience – exactly what a lot of fake news is specifically designed to do.
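The dynamic described above can be illustrated with a deliberately simplified sketch. The scoring function, weights and example posts below are invented for illustration and bear no relation to any platform's actual formula; they only show how ranking purely on interaction counts lets a sensational fabrication outrank an accurate report.

```python
# Hypothetical engagement-weighted ranking (illustrative only, not any
# platform's real algorithm). Posts are scored purely on interaction
# counts, so accuracy plays no part in the ordering.

def engagement_score(post):
    # Weights are invented assumptions for this sketch.
    return post["shares"] * 3 + post["comments"] * 2 + post["likes"]

posts = [
    {"title": "Accurate early report", "shares": 40, "comments": 25, "likes": 300},
    {"title": "Sensational false claim", "shares": 500, "comments": 200, "likes": 900},
]

# Sort highest-scoring first: the false claim wins because engagement,
# not truth, is the only signal the scorer sees.
ranked = sorted(posts, key=engagement_score, reverse=True)
print(ranked[0]["title"])  # Sensational false claim
```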
That problem is much bigger in the wake of a disaster, when facts are still unclear and demand for information runs high.
Malicious actors have learned to take advantage of this, says Mandy Jenkins, head of news at social media and news research agency Storyful. "They know how the sites work, they know how algorithms work, they know how the media works," she says.
Getting algorithms right
Breaking news is also inherently challenging for automated filter systems. Google says the 4chan post that misidentified the Las Vegas shooter should not have appeared in its "Top Stories" feature, and it was replaced within hours.
Outside experts say Google was flummoxed by two different issues.
First, its "Top Stories" feature is designed to return results from the broader web alongside items from news outlets. Second, the signals that help Google's system evaluate the credibility of a web page – for instance, links from known authoritative sources – aren't available in breaking news situations, says independent search optimisation consultant Matthew Brown.
"If you have enough citations or references to something, algorithmically that's going to look very important to Google," Brown says. "The problem is an easy one to define but a tough one to resolve."
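Brown's point about citations can be sketched in simplified form. The domain names, weights and link lists below are invented for illustration – real search ranking is far more complex – but the sketch shows why link-based credibility signals fail in breaking news, when no authoritative source has had time to link anywhere.

```python
# Simplified link-based credibility scoring: inbound links act as "votes",
# with links from known authoritative domains counting more. All domains
# and weights here are hypothetical examples.

AUTHORITATIVE = {"established-newspaper.example", "wire-service.example"}

def credibility(inbound_links):
    # A link from an authoritative domain counts triple in this sketch.
    return sum(3 if domain in AUTHORITATIVE else 1 for domain in inbound_links)

# Minutes after a breaking event, authoritative outlets haven't linked yet,
# so a rumour page with many low-quality links can outscore a real report.
rumour_links = ["forum-a.example", "forum-b.example", "forum-c.example",
                "blog-x.example", "blog-y.example"]
report_links = ["established-newspaper.example"]

print(credibility(rumour_links))  # 5
print(credibility(report_links))  # 3
```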
More people, fewer robots
United States law currently exempts Facebook, Google and similar companies from liability for material published by their users. But circumstances are forcing the tech companies to accept more responsibility for the information they spread.
Facebook said last week that it would hire an extra 1000 people to help vet ads after it found a Russian agency had bought ads meant to influence last year's election. It is also subjecting potentially sensitive ads, including political messages, to "human review".