Stopping the unstoppable
How does the chief censor cope when hundreds of potentially harmful videos are uploaded every day? Katie Kenny reports.
On Wednesday, October 9, two people died and more were injured in an antisemitic attack in the eastern German city of Halle.
Chief Censor David Shanks in Wellington learned of the attack at 6am the following day. It was a copycat of New Zealand’s March 15 terror attack, when an alleged white supremacist opened fire in two Christchurch mosques, killing 51 worshippers while broadcasting live on Facebook.
For the second time in just over six months, Shanks would find himself fronting media on issues relating to terrorist and violent extremist content online.
The German shooter’s platform of choice was streaming site Twitch, known for its video game content. He apologised to his viewers when he was arrested after failing to enter a synagogue where up to 80 people had gathered for Yom Kippur, the holiest day of the year in Judaism.
Twitch confirmed about five people watched the livestream in real time and thousands of others saw it before it was flagged and removed. While it continued to circulate in darker corners of the internet, it wasn’t easily found on the bigger social media platforms.
That was in contrast to the video of the Christchurch attack, which by any definition of the term went viral. Users attempted to re-upload it 1.5 million times on Facebook. YouTube at one point was removing one copy of it per second.
On March 20, Shanks classified the Christchurch video as objectionable because of its depiction and promotion of extreme violence and terrorism – meaning it’s illegal for anyone in New Zealand to view, possess, or distribute it. Three days later, he also banned a document, or manifesto, said to have been written by the terrorist.
That Thursday morning after the German attack, Shanks and several classification officers watched the Halle video. Reporters were already asking if he’d ban it.
By 11.30am, he made the call. ‘‘While this video is not filmed in New Zealand and fatalities are fewer than in Christchurch, the fundamentals of this publication are the same as that of the March 15 livestream,’’ he said in a statement. ‘‘It appears on the face of it to be a racially motivated terrorist attack depicting cold-blooded murder of innocent people.’’
An old model for a new age
In 1915, a conference of representatives of 45 organisations called for the introduction of a censorship system. They claimed: ‘‘The class of moving pictures at present exhibited in New Zealand constitutes a grave danger to the moral health and social welfare of the community.’’
The first film censor was appointed the following year. He snipped naughty bits from magazines and banned some books entirely.
The Office of Film and Literature Classification was established as an independent Crown entity under the Films, Videos, and Publications Classification Act 1993.
‘‘In 1993, the idea of the internet and what it could become was just a twinkle in the legislator’s eye,’’ Shanks says. Then, ‘‘everything was physical’’: tapes, books, magazines.
‘‘Fast-forward to 2017 and the universe is fundamentally changed in terms of how people consume and conceive of, and market and provide, media. When I came into the role [that year] I knew I’d need to match the framework against the reality.
‘‘I think about this role as fundamentally about being a media regulator, who has a responsibility to keep people safe from harm and also to protect people’s freedoms.’’
Shanks has a background in legal roles and came to the job from being in charge of health, safety and security at the Ministry of Education. He was thrust immediately into the limelight over the controversial Netflix series 13 Reasons Why.
The programme, targeted at teenagers, addresses or depicts rape, suicide, drug use and bullying, and was easily accessible for young people to watch unsupervised via the Netflix streaming service. Shanks introduced a new classification for the show: RP18. This meant anyone under 18 should watch the programme only with the support of an adult to help process the topics raised in the series.
But 13 Reasons is, in one respect, not typical of the kinds of potentially harmful content viewed by young people in 2019.
‘‘We know from our research on young people that a large amount of their content is not from cinema or TV or even streamed services. It’s YouTube or other similar free tubes,’’ Shanks says.
‘‘If you think about that as an example, [YouTube’s] current stats are about 500 hours of content going up every minute. There is no sensible way you can have human moderation of classification of tubes generating that amount of content.’’
Whereas Shanks could see the second season of 13 Reasons coming, and speak with Netflix about its release, there is no way censors could know where the next white supremacist meme is coming from.
Canterbury University sociologist Michael Grimshaw points to the banning of the alleged Christchurch shooter’s so-called manifesto as further evidence of the problem.
‘‘The aim of banning manifestos worked when you could shut down the means of publication and also shut down the means of distribution; that is, in the world of physical media,’’ he says.
Now, documents circulate independently and can contain many embedded links, making them much more than a single document.
‘‘So every manifesto is a multiplicity of parts that can be divided up and circulated, and so the model is not up to date,’’ Grimshaw says.
This is where digital solutions, such as artificial intelligence (AI) that finds and flags dangerous content, enter the conversation.
Big platforms like YouTube and Facebook already use AI to identify and remove extremist content, pornography and other types of material. Last month, Facebook announced a range of measures to better clamp down on violent extremists, terrorists and hate groups on its platforms. These include using first-person military videos to train artificial intelligence to more quickly identify terror attacks like the live-streamed Christchurch massacre.
The Office of Film and Literature Classification is developing a tool of its own. It’s essentially a filter for New Zealand’s sensitivities, applied over the top of the self-classification