Battling everyday hatred online
On Friday, two days before the six-month anniversary of the March 15 terror attacks, Prime Minister Jacinda Ardern was asked if so-called everyday New Zealanders cared about the Christchurch Call. She replied: ‘‘I think New Zealanders cared about the harm that was done by that video, and we’ve identified the best way we can deal with that is via this vehicle.’’
It was a sensible response to a not entirely serious question. When she was asked if it was cynical to claim, as Opposition leader Simon Bridges had, that everyday New Zealanders don’t care about the Christchurch Call, she said, ‘‘I think it’s for people themselves to determine that. It’s pretty hard to make those calls on other people’s behalfs.’’
A lot has been written since last week about the phrase ‘‘everyday New Zealanders’’, the kind of signal it may have been intended to send and whether it popped into Bridges’ head or was workshopped by strategists. It is not worth re-examining those questions. But it is worth reiterating that using the terror attacks as a political football is a highly risky and possibly distasteful manoeuvre that could easily backfire on National.
One of the risks Bridges could not have foreseen is that the Christchurch Call, a Paris summit in May that he has airily dismissed as a ‘‘nebulous, feel-good’’ talkfest, is already bearing fruit. Facebook has announced ‘‘a series of updates and shifts’’ that will combat hateful, extremist content on both Facebook and Instagram.
While some of the changes pre-date it, and even the terror attacks, the Christchurch Call has ‘‘strongly influenced’’ the tech giant’s thinking, according to its media statement. Restrictions on Facebook Live, which the alleged Christchurch gunman used to broadcast video of the attacks, were introduced in May. Facebook’s media release acknowledges that the targeting of white supremacist groups and individuals had lagged behind its emphasis on Islamic terrorism.
Facebook has also revealed that since March it has been actively intervening in the online behaviour of people in the US who search for white supremacist topics, by redirecting them to Life After Hate, an online resource founded by former extremists. This tactic is now being rolled out to Facebook users in Australia and Indonesia, who will be directed to similar resources in their own countries. Facebook is looking for an equivalent partner in New Zealand.
Some will argue that Facebook’s intervention in the online activities of users is tantamount to censorship. There was a justified uproar when it was revealed that Facebook had conducted mood experiments on users. A key difference is that the anti-terror measures are not conducted in secret. Rather than a thought-control experiment, Facebook’s limiting of searches for terrorist material is closer to the responsible curation and presentation of content any mainstream media company should aim to practise. Other tech companies, like YouTube and Twitter, should follow suit.
Much is still unknown about how hate and extremism spread online, and there are many other platforms where they flourish. As shown in Stuff Circuit’s recent documentary Infinite Evil, the obscure 8chan website rapidly became central to white supremacy. When one site closes or becomes more controlled, the hate can simply migrate elsewhere. But Facebook’s actions are a useful step forward that most New Zealanders, everyday or not, would welcome.