The Chronicle

Hate has a head start

Facebook is running way behind

- NICK WHIGHAM

SOCIAL media companies Facebook, Google and others have been slammed in the wake of the Christchurch massacre for failing to stop the spread of violent footage posted by the shooter.

Pressure is mounting on them to do more after the terrorist’s video quickly spread across the internet, but former tech employees say it’s not going to get any better.

Facebook said it removed 1.5 million videos of the New Zealand shootings, including 1.2 million that were blocked from being posted.

That implies 300,000 versions of the video were available to watch for at least a short time before Facebook managed to pull them down.

For hours after the attack, the video circulated on other content sharing sites.

Prime Minister Scott Morrison demanded that tech giants provide assurances that they would prevent attacks from being shown online, suggesting live streaming services could be suspended.

Criticism has come from all corners, but serious questions remain about whether these sites can reliably be tasked with preventing another horrific live-streamed video from circulating so widely.

These companies use a combination of algorithms, human workers and user reporting to police content. But given the huge volume of postings during an event like Christchurch, blocking everything in real time is currently an impossible task.
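
To illustrate the division of labour, here is a minimal sketch of that three-layer triage in Python. All names and thresholds are hypothetical; real moderation systems are vastly more complex.

```python
# Hypothetical sketch of the three-layer triage described above:
# algorithms act on confident cases, humans review borderline ones,
# and user reports feed into the queue. Not any company's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    classifier_score: float  # 0.0 (benign) to 1.0 (likely violating)
    user_reports: int        # number of times users have flagged it

def triage(post: Post) -> str:
    """Route a post: auto-remove, queue for human review, or leave up."""
    if post.classifier_score > 0.95:
        return "remove"        # algorithm is confident enough to act alone
    if post.classifier_score > 0.6 or post.user_reports >= 3:
        return "human_review"  # borderline or heavily reported: a person decides
    return "allow"
```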

Alex Stamos is a computer scientist and the former chief security officer at Facebook. The day after the massacre he took to Twitter to lament the immense difficulty faced by a company like Facebook when so many users willingly post the violating footage.

“Millions of people are being told online and on TV that there is a video and a document that are too dangerous for them to see, so they are looking for it in all the normal places,” he said.

Even if the company’s filtering systems were bulletproof, questions still remain about what should be allowed for legitimate reporting purposes and how to differentiate, he wrote.

In short, “It isn’t going to get a lot better than this”. In fact, it will likely get worse.

Others were quick to point out that recent changes announced by Facebook CEO Mark Zuckerberg, which introduce encrypted messaging and ostensibly boost privacy on the platform, will limit the company’s ability to pull down infringing content.

“End-to-end encryption prevents anyone – including us – from seeing what people share on our services,” Zuckerberg said this month.

Tech firms have long struggled to balance their ethos of supporting free speech with the need to remove and prevent the spread of terrorist content.

In 2016, Google, Facebook, Twitter and Microsoft announced they had teamed up to create a database of unique digital fingerprints known as “hashes” for videos and images that promote terrorism.

The approach, known as perceptual hashing, means that when one company takes down a piece of violating content, the others can use the shared hash to identify and remove the same material.
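
To make the idea concrete, here is a minimal sketch of one simple perceptual hash (an “average hash”) in Python, assuming the Pillow imaging library. The shared industry database relies on more robust algorithms (Microsoft’s PhotoDNA is one widely cited example), and the known-bad hash value below is a placeholder, but the principle is the same: visually similar files yield fingerprints that differ in only a few bits.

```python
# A minimal "average hash" sketch, assuming the Pillow imaging library.
# This is an illustration of perceptual hashing in general, not the
# algorithm the companies actually share.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink and greyscale an image, then threshold each pixel against
    the mean brightness to produce a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance means near-identical images."""
    return bin(a ^ b).count("1")

# Hypothetical usage: flag an upload if its fingerprint sits within a few
# bits of a known-violating hash shared across companies.
KNOWN_BAD_HASHES = {0x8F3C_19A2_77D0_54E1}  # placeholder value

def is_match(upload_path: str, threshold: int = 5) -> bool:
    h = average_hash(upload_path)
    return any(hamming_distance(h, bad) <= threshold for bad in KNOWN_BAD_HASHES)
```

Because the fingerprint tolerates small differences, a re-encoded or lightly cropped copy of a banned image can still match; heavier edits can push the hash past the threshold, which is one reason the system remains imperfect.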

But like other systems designed to improve content moderation, it is imperfect, locked in a never-ending game of cat and mouse with users intent on sharing the content.

And it’s a problem that doesn’t look like going away any time soon.

RAPID SPREAD: Tech firms such as Facebook have struggled to handle the spread of violent content. Photo: iStock
