Understanding the damage hate speech does
Pro Vice-Chancellor of the College of Humanities and Social Sciences, Massey University
Among the many issues raised by the terrorist attack was the role of online platforms – did they react adequately, and fast enough, as the massacre unfolded? Had New Zealand authorities monitored online sites adequately? Should there be more regulation of sites, particularly with regard to hate speech? And have we paid as much attention as we should have to far-Right activists, groups and ideologies?
During the rallies in support of free speech in mid-2018, and in the wake of the debate about two far-Right YouTubers from Canada, some supporters held up signs saying ‘‘Free Tommy’’. I wonder how many New Zealanders know who Tommy is.
He is Tommy Robinson (real name Stephen Yaxley-Lennon), a far-Right British activist who helped form the English Defence League in 2009. In February this year, he was banned from Facebook and Instagram, following earlier bans from Twitter and PayPal. On Facebook, he had a million followers.
Facebook declared that Robinson had ‘‘repeatedly broken standards, posting material that uses dehumanising language and calls for violence targeted at Muslims’’. His admiration for Proud Boys, a US far-Right and violent nationalist group, attracted considerable attention, especially as most online platforms had already banned Proud Boys, and founder Gavin McInnes, on the grounds they contributed to hate speech.
The question is, do we have similar issues in New Zealand when it comes to hate speech online?
I suspect many might be surprised at the extent to which some New Zealanders are exposed to cyberbullying and hate speech. In the wake of the Christchurch mosque massacres, we should be asking specific questions about Islamophobic material. Local and international research shows there has been a significant spike in online hate speech since 2017 – and we should not allow it to become normalised. But first we need to understand the extent and impact of hate speech locally.
Recent research suggests that the insidious impact of hate speech is being felt in this country. Netsafe’s 2018 research indicates that three out of every 10 New Zealanders surveyed had encountered online hate speech that targeted someone else, while 11 per cent had been personally targeted by such speech.
ActionStation, in another survey last year, also found hate speech was an issue, especially for certain ethnic groups who were racially abused and harassed. Both surveys noted that young people in New Zealand were particularly affected. ActionStation found almost one in five teenagers had been affected negatively by unwanted digital communications.
This was reinforced by meetings through 2018 at which youth representatives relayed horrific examples of online denigration and abuse involving, or targeted at, young New Zealanders, sometimes with disastrous effects.
All this raises two questions: what qualifies as hate speech, and what should we do to minimise public and private harm? On the first, some of the online platforms provide working definitions. For example, YouTube defines hate speech as ‘‘content that promotes violence or hatred against individuals or groups based on certain attributes’’ and then goes on to list the attributes.
In the case of Robinson, Facebook said, ‘‘When ideas and opinions cross the line and amount to hate speech that may create an environment of intimidation and exclusion for certain groups – in some cases with potentially dangerous offline implications – we take action.’’
It could be argued that these online platforms could do more and sooner, but they are now much more active in monitoring content. In New Zealand, we have a number of acts that are relevant, including the Crimes Act 1961 and the Human Rights Act 1993. Perhaps the most relevant is the Harmful Digital Communications Act 2015.
But, I would argue, it is not simply about legislation. It is about public awareness and public discussion so we understand what actually happens in our community and the impacts of hate speech on groups and individuals.
It is about speaking out and drawing attention to examples of hate speech and countering that which is inaccurate and hurtful. It is about supporting our public agencies as they monitor and reduce the public harm from hate speech.
And yes, we do need to debate the difference between free speech and hate speech. But hate speech is not free speech.
Professor Spoonley was due to give a free public lecture at Wellington’s National Library tonight, but this has been postponed due to security concerns.