Who are the custodians of the internet — the men, women, and algorithms who determine the content we see on social media? What are they hiding and removing, and what do they base their judgments on? The answers are rarely, and never thoroughly, forthcoming from the social media companies themselves. “When they acknowledge moderation at all, platforms generally frame themselves as open, impartial, and noninterventionist — in part because their founders fundamentally believe them to be so, and in part to avoid obligation or liability,” writes Tarleton Gillespie, a principal researcher at Microsoft Research New England, in his scholarly new book, Custodians of the Internet.
But, Gillespie argues, content moderation is not an auxiliary feature of social media platforms. Instead, “moderation is, in many ways, the commodity that platforms offer.” It is “a key part of what social media platforms do that is different, that distinguishes them from the open web: they moderate (removal, filtering, suspension), they recommend (news feeds, trending lists, personalized suggestions), and they curate (featured content, front-page offerings).” Understanding moderation, then, is essential not just to understanding why a particular Instagram post was taken down. It is essential to discerning how what we are allowed to read and publish online gets determined. Beyond their effects on information sharing, those determinations ripple across social norms, values, and politics. “Platforms may not shape public discourse by themselves, but they do shape the scope of public discourse,” Gillespie writes. “And they know it.”
All stakeholders in content moderation — from the Silicon Valley companies that set the rules to the end user who comes across a personally offensive post — have an all-but-impossible task. (The stakeholders who have the worst of it are undoubtedly the men and women around the world whose job it is to decide in mere seconds whether, for instance, a post is describing sexual violence or condoning it.) At every step of the moderation process, judgments must be made about what is acceptable versus what is pornography, hate speech, obscenity, abuse, or the promotion of self-harm; the boundaries must be reviewed and perhaps redrawn each time a user introduces another potentially actionable offense. Rules and standards must be applied to topics on which cultural values vary widely.
Gillespie takes it as a given that social media platforms need content moderation, considering the proliferation of horrors and harms that would otherwise unfold. The question, then, is not whether platforms should moderate, but how they can do it best, or at least in the least objectionable way. Because the companies’ moderation processes are not public, a complete review of the hows and whys is impossible. But Gillespie works around this significant hurdle by thoroughly researching