BALANCING CENSORSHIP AND RESPONSIBILITY
After a white nationalist slaughtered 50 Muslims in New Zealand, Margaret Sullivan, media critic of The Washington Post, posed this question to the digital platforms the killer used to spread his murderous message: “Where are the lines between censorship and responsibility?”
Those platforms — YouTube and Facebook, Twitter and Reddit — must now answer that question with clarity and candor, because their role in the massacre is undeniable. As Neal Mohan, YouTube’s chief product officer, told the Post: “This was a tragedy that was almost designed for the purpose of going viral.”
The shooter was, in effect, playing a deadly video game, live-streaming his attack while encouraging his followers to reproduce and repost the images of carnage faster than social media platforms could remove them. The platforms tried; Facebook blocked more than 1 million instances of the 17-minute clip in the first 24 hours, but they were hopelessly outmanned.
The internet did not create white nationalism or anti-Muslim fervor. And digital tools are used every day for countless positive purposes. But as New Zealand damnably demonstrates, social media platforms are highly vulnerable to corruption and abuse. Facebook, YouTube and the rest are not merely common carriers like the phone company, neutral pipes transmitting any and all information. They constantly make editorial and ethical decisions that influence what consumers are exposed to, so the question is how those decisions are made and what standards are used. What is the proper balance between responsibility and censorship?
As journalists who cherish the First Amendment, we always tilt against censorship. Social media outlets — much less the federal government — should not be the ultimate arbiters of what people know and learn.
One area where social media companies must improve, however, is crisis management. Even ardent civil libertarians admit that when words and images present a “clear and present danger,” when they threaten to unleash immediate violence, society has an obligation to protect itself and contain that danger.
When the New Zealand shooter’s videos started cascading through the internet, platforms relied on a combination of artificial intelligence and human moderators to thwart their spread, and they failed miserably. Facebook didn’t even know the original video had been posted on its site until local police alerted the company.
But crisis management is only a small part of the problem. A much deeper issue facing digital platforms is the way they encourage and enable radicalization online.
As users explore a topic, algorithms crafted by the platforms suggest new videos that draw them deeper into “rabbit holes” of twisted and tendentious ideologies. The goal is profit: keep viewers watching, increase the time they spend online and maximize ad revenue.
But this relentless pursuit of eyeballs and earnings has devastating side effects. Not only do users see and absorb increasingly extremist ideas, they bond online with others who are drawn into the same vortex of hate and violence.
Here’s where the balance between censorship and responsibility must swing toward responsibility.
If those platforms don’t act on their own, society will fight back in the form of onerous rules and regulations that restrict free speech. The only way to avoid censorship is to accept responsibility.