Cosmopolitan (UK)

JENNA* HAD TO GET OUT OF THE ROOM.


She didn’t care that between her and the exit was an obstacle course of people, bags, coats, cables and phone chargers. She clambered over them all. Anything to escape the sensation of panic that was rising inside her; the burning in her ears and the intense churning in her stomach.

It was Jenna’s second week of training at her new job and – up until that point – she’d known very little about what the role actually involved. Instead she’d been excited by the perks: the free food in the office canteen and her new MacBook. The pay was nearly £25,000: not bad, considering she was only 20 and still lived with her parents. And there was the promise of progression – the agency that hired her said there could be pay rises, promotions and bonuses for good work.

But, on that day, reality set in. On the screen before her, the trainer flashed up screenshots of videos showing people causing serious harm to themselves. Jenna had never seen anything like it before. She thought she had a pretty good understanding of what was out there on the internet. But it was becoming clear she didn’t – at all. Those videos were just the beginning, and Jenna was realising she had signed up to a job that meant she’d be looking at them, or perhaps worse, every single working day.

Each morning, when you pick up your phone and begin your daily scroll, absentmindedly flicking past your best friend’s throwback holiday pictures, the latest viral trend and “hot takes” condensed into bitesize chunks, there are things you aren’t seeing. Images and videos that were uploaded and then hastily removed from the platform, sometimes by a computer, but sometimes by people like Jenna. The job Jenna was training for – and ended up working in for five months – was as a content moderator for a large tech company.

Content moderators are an invisible force online, silently mopping up the internet’s dirt by removing social-media posts that reflect the darkest parts of human nature. Their work means users can spend time on social-media sites without encountering graphic violence, child abuse or pornography – but, in order to remove that content, the moderators themselves have to see it. Something that, as Jenna learned that day in training, was not going to be easy.

“I guess they were trying to prepare us,” she says of her training, “but I felt so shocked and completely unqualified to deal with it.” When I meet her in a coffee shop earlier this year, she seems self-assured, cool and confident. But as soon as she begins to describe those images to me, that certainty slips and her voice begins to shake.

That night, post-training, she had her first doubts about the job. She talked them through with her mum, who suggested the content she saw could be a worst-case scenario, not a daily task. Feeling reassured, Jenna returned the next day and the following week. She officially joined the army of content moderators operating across the world on behalf of different platforms. But the role wouldn’t just require her to view violent and graphic images and videos – she’d also have to make almost lightning-fast decisions about whether the world should be allowed to see them too.

“EDGE CASES”

Every day, Jenna would arrive for work at the agency’s office. Live streams of the company’s other offices around the globe were projected on a wall behind the desks. “I felt like I was part of something big,” she says.

As a moderator, Jenna would sit down at her MacBook and be faced with a list of workflows – one for message threads or one for videos. Most decisions moderators make are clear-cut. They watch a video of people having sex, know the platform has a policy against that and take it down. The same goes for content showing someone harming animals or hurting themselves.

Moderation gets harder when posts can’t be clearly categorised as “good” or “bad”. Rules vary from platform to platform, as if each site were its own country, with its own laws called “community guidelines” – decided not by governments but by the companies’ “policy teams”, who pass their decisions down for content moderators to enforce. Moderators are like the platform’s own police force, with the power to issue the ultimate punishment of online ostracism. They can remove posts and even ban users. Sometimes moderators are employed directly by the company. But often these roles are outsourced to agencies, like Jenna’s, where workers review content that has been reported by other users as inappropriate or flagged by algorithms.

When Sara* began working with social-media content last year, she thought she had a pretty clear idea of what evil looked like. As an employee of a UK agency, she searches different platforms for posts related to terrorism, gender-based violence and child abuse. After over a year in the role, Sara, who is in her twenties, explains she has seen a more sinister side to human nature. “You start feeling like, ‘Wow, there are way more creeps out there than you realise,’” she says. “It changes your opinions on certain things because you can’t un-see them. The simplest things can be manipulated.”

Take breastfeeding, for example. “Before starting this job, I wouldn’t have had an issue with breastfeeding in public,” Sara explains. “Now, after being exposed to certain things, I’m very against it.” She cites secretly filmed videos in which the child is sexualised on social media, and women who upload videos of themselves breastfeeding in exchange for payment. Sara’s frontline experience of how innocent posts can be manipulated explains one reason why, until 2014, platforms like Instagram removed breastfeeding images. This June, David Tennant’s wife criticised Facebook for removing her breastfeeding photo because it violated the platform’s policy on “sexual images”. In a statement, Facebook said that the image was allowed under its guidelines and its removal must have been a mistake. The breastfeeding debate outlines one of the key challenges for social platforms – to create a policy that cracks down on sinister content while also allowing well-meaning users to express themselves freely.

These are what Sarah T Roberts, a UCLA professor and author of Behind The Screen, calls “edge cases”. She’s been studying content moderation for 10 years. The line between “good” and “bad” content depends on the platform. On Instagram, some photographs of female nipples are removed, but male nipples stay up. According to a 2017 leak of Facebook’s internal documents, comments such as “Someone shoot Trump” should be deleted because, as head of state, he is in a protected category. But a graphic description of how to “snap a b*tch’s neck” was allowed as the threat wasn’t considered credible. On YouTube, violent content is removed unless it is in a news or documentary context. In July, many Twitter users accused the platform of acting too slowly after rapper Wiley posted a stream of anti-Semitic tweets. Five days after the tirade, Wiley’s account was suspended because he had broken Twitter’s “hateful conduct” policy. “We are sorry we did not move faster,” Twitter said in a statement.

JUDGEMENT CALLS

When Jenna sat at her computer one rainy morning in March last year, her “queue” of images to vet was all of one thing: a yellow tram surrounded by red and white tape, and special forces carrying guns. When major news stories take place, social media reacts – and that day a terrorist had shot three people dead and injured seven others in the Dutch city of Utrecht. The graphic content captured by those at the scene flooded feeds, eliciting a wave of “edge cases”. As bosses at the platform debated what was and wasn’t acceptable, Jenna received a barrage of updates – “Disable this, label that, keep this up. If people are praising [the attack], take it down. But if they’re sharing it with a disapproving caption, that can stay up.”

Roberts explains how nuanced these decisions can be: “Say a video has come in [for a moderator to review] that is violent and shows children being harmed. If we were to review a given platform’s policy, you’ll see all kinds of prohibitions against those things. What if I tell you that’s a video from a group of people under siege in Syria? Footage of the war in Syria has been used as evidence in war-crime courts – should it still be taken down?”

Similar guidelines to those issued in response to the Netherlands attack were communicated three days earlier, when a terrorist carried out a shooting in a New Zealand mosque, having published a manifesto crammed with conspiracy theories online just beforehand. The policy team told moderators, including Jenna, that if users were quoting the manifesto but condemning it, their posts should stay up. But Jenna’s team pushed back. “We all decided, ‘No, we’re just going to take it all down. It doesn’t need to be online or shared. There’s no need to give people ideas,’” she says. Whether or not violent content should stay online because it is “newsworthy” is a grey area, with the decisions, policies and tools to navigate content constantly in flux. Context is key. When videos appeared online after the killing of George Floyd earlier this year, YouTube added age restrictions and warnings to them, but they remained on the platform because the content could be categorised as news.

In 2016, debates around how content is moderated on social media took on new relevance with allegations that both the election of Trump and the Brexit referendum were influenced by misinformation online. This year, YouTube removed coronavirus-linked conspiracies, while Facebook notified users if they had been exposed to “fake news”. Mark Zuckerberg also announced that users would have the right to adjust their preferences on Facebook, to stop any political adverts appearing in their feeds. But all of this has raised another difficult issue for platforms – do people have the right to be wrong on social media? And do they have that right on some platforms more than others? In 2018, Pinterest stopped returning any search results for “vaccines” or “cancer cures”, only offering users search results from reputable sources such as the World Health Organisation. The policy was labelled “aggressive” but also won praise as “the right step”. The head of Pinterest’s Trust and Safety team, Charlotte Willner, says the site’s brand made it easier to shut down misinformation. “A lot of platforms say, ‘We are going to be the place for everyone’s point of view.’ We’re just not that. We’re a place for people to come and figure out how to design a life that they love.”

BURNT OUT

After a few months in the job, Jenna would often have to escape to the toilets to cry. She began booking multiple sessions each week with the on-site counsellor. She also noticed her mental state was deteriorating even outside of work. “I was snappy, short-fused and agitated all the time,” she says, adding she felt undervalued and burnt out.

She now describes the job as long stretches of boredom punctuated by extreme, graphic content. “A lot of the content was 14-year-olds talking about video games and homework, annoying each other [by saying], ‘Haha, I just reported you.’ That was 75% of my day,” she says. But there was also content featuring real-life dismemberment and animal cruelty. She’d always watch those with no sound.

This work takes its toll on moderators. In a landmark acknowledgement of that fact, in May, Facebook agreed to pay $52 million to a group of 11,250 current and former moderators to compensate them for mental health issues developed on the job. In January, it was claimed that content moderators working for Facebook in Europe via an agency were required to sign forms explicitly acknowledging their job could cause post-traumatic stress disorder. Facebook said it did not review or approve the forms and was not aware that its content moderators were being asked to sign them. It did say, however, that it required its partners to offer extensive psychological support to its moderators on an ongoing basis.

It’s been a year since Jenna left her job, but when I ask if those five months still affect her, she hesitates before agreeing that they do. For agency workers, there is no psychological support once they’ve left their jobs. She still recognises the critical role content moderators play – but thinks they should be more highly valued.

“Social media companies need to start being honest about the dirty work their contractors do,” she says.

Content moderation is a game of cat and mouse. Devious users will always exploit new technologies to troll and spread fake news or disturbing content. It’s the job of platforms to keep up. But the decisions these companies make have real-life consequences. Increasing realisation of that fact has led Facebook to set up its own independent “supreme court”, designed to be a new, more accountable model of content moderation that will debate contentious cases. Yet however the policies are decided, keeping the internet clean would be impossible without content moderators – who are operating largely in the dark.

The fact that they often have to sign non-disclosure agreements, pledging never to speak publicly about their job, complicates things further. Platforms say such secrecy stops users learning how to side-step the rules. But, says Tarleton Gillespie, a researcher at Microsoft and author of Custodians Of The Internet, the mental toll on moderators and the idea of an internet full of disturbing content also shatter the image of connected communities where everyone’s free to be themselves. “[The platforms] wish that content moderation was like the work of a janitor, where they come in quietly, clean up the mess and turn off the lights,” he says. “They recognise that’s not how it works, but that’s the dream.”

Social media blurs the boundaries between public and private; between the wider world and your friends and followers. Facebook has 2.6 billion users – more than three times the population of Europe – and YouTube sees 500 hours of content uploaded every minute. Never before has such a wide spectrum of human behaviour been available to be seen and judged. What stays up and what is taken down can have devastating real-world consequences, giving moderators an ever-more crucial role in our new digital societies. The price they pay for that role? We are only just beginning to find out. ◆

[Image: New Zealand, 2019: online content is closely monitored when disastrous events occur]
[Image: Devious users can manipulate even the most innocent photos]
[Image: Moderators vet images of attacks like the one in Utrecht]
