South China Morning Post

Twitter under Musk may be heading down a dark road

Mohammed Sinan Siyech says firing those responsible for content moderation is worrisome

- Mohammed Sinan Siyech is a doctoral candidate at the Islamic and Middle East Studies Department at the University of Edinburgh and a non-resident associate fellow at the Observer Research Foundation, New Delhi

News cycles dedicated to Elon Musk’s troubled takeover of Twitter and his subsequent mission to trim the workforce have dominated discussions in the tech space. Since Musk took control of the platform late last month, more than half of Twitter’s 7,500 global employees have lost their jobs or resigned in response to new policies introduced early this month.

In addition, an estimated 4,400 of Twitter’s 5,500 contractors were fired. While most of these contract workers focused on areas such as engineering, real estate and marketing, some were involved in the crucial work of content moderation.

Content moderators are among the most essential workers at any major social media platform: not just Twitter, but also companies such as Facebook and TikTok. They are responsible for pulling inappropriate or graphic content – such as videos of suicide, hate speech, violence, pornography and disinformation – and for reviewing posts reported for violations of company policy.

In this sense, moderators are an invisible army essential for protecting Twitterati and other social media users from the depths of human depravity. Given the real-world impact of social media, content moderators also play a vital role in maintaining peace and reducing hate speech and hate crimes.

It is their job to pull disinformation during election cycles and to remove videos posted by terrorist groups. But the violent nature of the videos reviewed has resulted in content moderators experiencing symptoms of post-traumatic stress disorder or feelings of isolation, and thus a high turnover rate (some employees quit after about a year).

Early signs following the staff cull are worrying: hate speech directed at various racial, religious and sexual minorities has increased. For example, the number of tweets using a racial slur directed at African-Americans rose by 500 per cent in the first 12 hours following Musk’s takeover. There has also been an uptick in anti-Semitic hate speech on the platform.

Most of these posts come from a small minority of around 300 troll accounts associated with right-wing America. This is not a surprising trend, given that Musk is understood to have been critical of the platform’s previous left-wing bias. It is also telling that shortly after sealing the deal to buy Twitter, Musk fired top executives who had been in favour of tighter content moderation.

One of them, Vijaya Gadde, as head of legal policy, trust, and safety, had played a crucial role in banning Donald Trump from Twitter in 2021, after the January 6 insurrection at the Capitol.

Apart from right-wing extremism, other forms of extremism have also surged on Twitter. For instance, according to the Institute for Strategic Dialogue, a London-based counterterrorism think tank, the number of new Islamic State Twitter accounts went up by 69 per cent soon after Musk’s takeover. Furthermore, amid Musk’s herky-jerky policies on verification, there have been known instances of Islamic State Twitter accounts impersonating a lingerie model, as well as an OnlyFans model with up to 10 million followers on different platforms.

With fewer content moderators around to take down such content, it is unclear how much more will pop up.

All of this does not bode well for a platform that already had content moderation problems, and where right-wing propaganda had been amplified over the years.

In the wake of the job cuts, two scenarios could unfold. First, it is possible that Musk will streamline and automate the content moderation process to such an extent that the human moderators who were laid off will not be replaced. This is difficult to imagine from both a management and a technical standpoint.

So far, Musk’s management strategy for Twitter has been haphazard in execution, with instances of fired staff being asked to return. If the billionaire owner’s understanding of content moderation is no sounder than his management strategy, the platform might be in for a tumultuous ride.

From a technical perspective, it is difficult to automate the process of monitoring tweets for several reasons. For starters, artificial intelligence is not advanced enough to spot cues such as sarcasm and satire, which distinguish genuinely extremist tweets from those that merely appear so.

Also, while this problem is bad enough with English-language posts, a lot of content is posted in many other languages, thus compounding the limitations of artificial intelligence.

Besides, hate speech parameters often change over time, as social media participants come up with ways to beat content monitors, and this means that the knowledge base has to be constantly updated – a task more suited to humans than machines, at least for now.

Against this backdrop, the second scenario is looking more likely: the increasing prevalence of extremist content could turn the platform into a cesspool. While observers hope this will not come to pass, current trends have yet to demonstrate that order will prevail in the medium to long term.

