Social media code of conduct
Social media giants including TikTok and Meta will soon become more accountable for the content on their platforms in Aotearoa, voluntarily signing up to a code of conduct led by Netsafe, which targets issues such as hate speech and misinformation.
The Aotearoa Code of Practice for Online Safety and Harms obligates the companies – Meta (Facebook and Instagram), Google (YouTube), TikTok, Amazon (Twitch) and Twitter – to actively reduce harmful content on their digital platforms and services in New Zealand.
If the public believes a company has breached the new code, they will be able to make complaints, which could result in sanctions, including being asked to leave the agreement.
Each company will also be required to publish annual reports about its progress in adhering to the code, which focuses on their systems, policies, processes, and tools to reduce the spread of harmful content, rather than replacing current legislation or regulations.
The code covers seven themes: child sexual exploitation and abuse, bullying or harassment, hate speech, incitement of violence, violent or graphic content, misinformation, and disinformation.
Netsafe chief executive Brent Carey described the code as a "world first".
"Having this code, which is filling some regulatory gaps, is a good first step to try to address some of these emerging issues, especially around hate speech, misinformation and disinformation," Carey said.
The 2019 live-streamed Christchurch mosque shootings, the subsequent Christchurch Call, and the 2022 occupation of Parliament grounds, partly driven by social media misinformation, all played a role in bringing about the changes.
The New Zealand code comes as countries around the world start to enact legislation to curb such internet behaviour in what Curtis Barnes, from technology law and policy specialists Brainbox, said was a ‘‘seismic change’’ in how the internet worked.
New Zealand was also getting the ball rolling on legislative changes and this voluntary code would sit alongside those, he said.
The creation of the code previously drew criticism from several groups, including Tohatoha NZ, InternetNZ and the Inclusive Aotearoa Collective Tāhono, which claimed that a lack of significant community engagement could see it fail the people it was intended to help.
"We remain disappointed with the process to get here, which started with online services rather than communities," InternetNZ interim chief executive Andrew Cushen said yesterday.
Tohatoha chief executive Mandy Henk said the code looked like a social media company-led effort to "claim legitimacy without having done the work to earn it".
"In our view, this is a weak attempt to pre-empt regulation – in New Zealand and overseas – by promoting an industry-led model that avoids the real change and real accountability needed."
In the next stage, NZTech, as the administrator of the agreement, will design how it will work and how the public can complain.
NZTech chief executive Graeme Muller said the code would be amended biannually.
"We hope the governance framework will enable it to evolve alongside local conditions, while at the same time respecting the fundamental rights of freedom of expression."
The agreement builds on other international codes of practice such as the EU Code of Practice on Disinformation, the EU Code of Conduct on Countering Illegal Hate Speech Online and the Australian Code of Practice on Disinformation and Misinformation.
Last year, Netsafe revealed reports of harmful digital communication had increased 24% in a year, and recent research showed that one in five adults, and twice as many young people, in NZ received a digital communication that negatively impacted their life in 2020.