Tracy King discusses the role of moderators in online discourse, and the lack of training and support they often receive
Last month I looked at in-game bullying and whether players are entitled to an environment they consider safe. I demonstrated that online bullying is real and sometimes has consequences for the bullied, but now I’d like to examine the consequences for the bully, and how people running an online service – games or social media – deal with them. Yep, I’m going to talk about moderators.
It’s an ugly word that conjures up images of authoritarians mad with power, passing down judgement on the little people at the whim of a mood or an over-strictly interpreted rule. This stereotype exists because it can be true – I’ll never forget a particularly heavy-handed mod suspending anyone who used the word ‘nimrod’ as an insult. It was absurd, subjective and incredibly human. But for the most part, moderators only intervene when needed.
And that’s the problem. Humans can be jerks. Some people want to watch the world burn – think of the public full-scale replica of Denmark built by the Danish government in Minecraft and ruined in hours, which is really funny but also awful if you think about it. But these people are usually in the minority.
When rules (or, in Denmark’s case, settings) are in place to minimise anti-social behaviour, they usually have a chilling effect on it, and the group as a whole behaves better than if no rules were in place. But moderators are humans with the same tendency towards jerkiness, and it only takes one of them overdoing it for a fun, safe environment to suddenly become a restrictive, oppressive one. One former employee of an MMO told me that, as a moderator, he once had to ban another moderator for abusing their power.
The biggest MMOs employ paid moderators alongside trusted volunteer mods. I spoke to a couple of former and current mods of various games, and was surprised to learn how little training was involved, although one of them said his community was largely self-policing.
I couldn’t find any reliable stats on what percentage of online gamers require intervention, but a 2016 Guardian report said 2 per cent of reader comments under articles had been manually removed since 1999 (that’s 1.4 million individual comments). Also, The Guardian’s official position on community self-policing is that it doesn’t work: ‘Experience has demonstrated that disruptive commenters can derail, negatively impact or wreck conversations despite [the community’s] best efforts.’ One bad apple spoils the barrel, basically.
Where games do rely on community self-policing, there’s often an automated action in response to a certain number of reports, which itself can be abused if enough players maliciously report an innocent comment. Automation is a blunt instrument, which can’t (yet) understand context, such as whether a player is using a swear word to describe themselves, or an inanimate object.
Algorithms are only as smart as the people programming them, after all. But if automation isn’t an optimal solution, that means actual people have to read abuse and decide if it’s sufficiently horrible to warrant action. That’s a hell of a job, and one that takes a toll. Following a lawsuit earlier this year against Microsoft by community managers traumatised by their work, an exposé of the working conditions of Facebook content moderators showed similar issues, with a lack of training and support being the main complaints.
It’s clear that people are going to be nasty online, and that automation is both abusable and insufficient, so human moderators will remain necessary – and the reality of that job is reading insults (not ‘nimrod’) and abuse all day, every day.