Los Angeles Times

Gangs of one hold sway on TikTok

Online trolls, coders use mass-reporting tools to have videos removed, even if they don’t violate rules.

By Brian Contreras

One hundred forty-seven dollar signs fill the opening lines of the computer program. Rendered in an icy blue against a matte black background, each “$” has been carefully placed so that, all together, they spell out a name: “H4xton.”

It’s a signature of sorts, and not a subtle one. Actual code doesn’t show up until a third of the way down the screen.

The purpose of that code: to send a surge of content violation reports to the moderators of the wildly popular short-form video app TikTok, with the intent of getting videos removed and their creators banned.

It’s a practice called “mass reporting,” and for would-be TikTok celebrities, it’s the sort of thing that keeps you up at night.

As with many social media platforms, TikTok relies on users to report content they think violates the platform’s rules. With a few quick taps, TikTokers can flag videos as falling into specific categories of prohibited content — misleading information, hate speech, pornography — and send them to the company for review. Given the immense volume of content that gets posted to the app, this crowdsourcing is an important weapon in TikTok’s content moderation arsenal.

Mass reporting simply scales up that process. Rather than one person reporting a post to TikTok, multiple people report it in concert, or — as programs such as H4xton’s purport to do — a single person uses automated scripts to send multiple reports.

H4xton, who described himself as a 14-year-old from Denmark, said he saw his TikTok Reportation Bot as a force for good.

“I want to eliminate those who spread false information or … made fun of others,” he said, citing QAnon and anti-vax conspiracy theories. (He declined to share his real name, saying he was concerned about being doxxed, or having personal information spread online; The Times was unable to independently confirm his identity.)

But the practice has become something of a boogeyman on TikTok, where having a video removed can mean losing a chance to go viral, build a brand or catch the eye of corporate sponsors. It’s an especially frightening prospect because many TikTokers believe that mass reporting is effective even against posts that don’t actually break the rules. If a video gets too many reports, they worry, TikTok will remove it, regardless of whether those reports were fair.

It’s a very 2021 thing to fear. The policing of user-generated internet content has emerged as a hot-button issue, pitting proponents of free speech against those who seek to protect internet users from digital toxicity. Spurred by concerns about misinformation and extremism — as well as events such as the Jan. 6 insurrection — many Democrats have called for social media companies to moderate user content more aggressively. Republicans have responded with cries of censorship and threats to punish internet companies that restrict expression.

Mass-reporting tools exist for other social media platforms. But TikTok’s popularity and growth rate — it was the world’s most downloaded app last year — raise the stakes of what happens there for influencers and other power-users.

When The Times spoke this summer with a number of Black TikTokers about their struggles on the app, several expressed suspicion that organized mass-reporting campaigns had targeted them for their race and political outspokenness, resulting in takedowns of posts that didn’t seem to violate site policies.

Other users — such as transgender and Jewish TikTokers, gossip blogger Perez Hilton and mega-influencer Bella Poarch — have similarly speculated that they’ve been restricted from using TikTok, or had their content removed from the platform, after bad actors co-opted the reporting system.

“TikTok has so much traffic, I just wonder if it gets to a certain threshold of people reporting [a video] that they just take it down,” said Jacob Coyne, 29, a TikToker focused on making Christian content who has struggled with video takedowns he thinks stem from mass-reporting campaigns.

H4xton posted his mass-reporting script on GitHub, a popular website for hosting computer code — but that’s not the only place such tools can be found. On YouTube, videos set to uptempo electronica walk curious viewers through where to find and how to run mass-reporting software. Hacking and piracy forums with names such as Leak Zone, ELeaks and RaidForums offer similar access. Under download links for mass-reporting scripts, anonymous users leave comments such as “I need my girlfriend off of TikTok” and “I really want to see my local classmates banned.”

The opacity of most social media content moderation makes it hard to know how big of a problem mass reporting is.

Sarah T. Roberts, an associate professor at UCLA and co-founder of its Center for Critical Internet Inquiry, said social media users experience content moderation as a complicated, dynamic, often opaque web of policies that makes it “difficult to understand or accurately assess” what they did wrong.

“Although users have things like Terms of Service and Community Guidelines, how those actually are implemented in their granularity — in an operational setting by content moderators — is often considered proprietary information,” Roberts said. “So when [content moderation] happens, in the absence of a clear explanation, a user might feel that there are circumstances conspiring against them.

“The creepiest part,” she added, “is that in some cases that might be true.”

Such cases include instances of “brigading,” or coordinated campaigns of harassment in the form of hostile replies or downvotes. Forums such as the notoriously toxic 8chan have historically served as home bases for such efforts.

Prominent politicians, including former President Trump and Sen. Ted Cruz, have, without evidence, accused Twitter of “shadowbanning,” or suppressing the reach of certain users’ accounts without telling them.

TikTok has downplayed the risk that mass reporting poses to users and says it has systems in place to prevent the tactic from succeeding. A statement the company put out in July said that although certain categories of content are moderated by algorithms, human moderators review reported posts. Last year, the company said it had more than 10,000 employees working on trust and safety efforts.

The company has also said that mass reporting “does not lead to an automatic removal or to a greater likelihood of removal” by platform moderators.
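TikTok has not published the internals of that review pipeline, but the principle the company describes — that piling on reports should not change the outcome — can be illustrated with a minimal, hypothetical sketch. The names below (ReviewQueue, submit_report, next_task) are assumptions for illustration only, not TikTok’s actual system: however many reports a flagged video attracts, they collapse into a single task for a human reviewer.

```python
class ReviewQueue:
    """Hypothetical moderation queue that deduplicates reports per video."""

    def __init__(self):
        self._tasks = {}   # video_id -> {"reporters": set, "reasons": set}
        self._order = []   # videos in the order their first report arrived

    def submit_report(self, video_id: str, reporter_id: str, reason: str) -> None:
        # The first report creates a review task; later reports only add detail.
        if video_id not in self._tasks:
            self._tasks[video_id] = {"reporters": set(), "reasons": set()}
            self._order.append(video_id)
        task = self._tasks[video_id]
        task["reporters"].add(reporter_id)  # repeat reports from one account count once
        task["reasons"].add(reason)

    def next_task(self):
        # A human moderator pulls one task per video, regardless of report count.
        if not self._order:
            return None
        video_id = self._order.pop(0)
        return video_id, self._tasks.pop(video_id)


if __name__ == "__main__":
    queue = ReviewQueue()
    # A thousand scripted reports of one video still yield a single review task.
    for i in range(1000):
        queue.submit_report("video_123", f"bot_{i}", "nudity")
    video_id, task = queue.next_task()
    print(video_id, len(task["reporters"]), task["reasons"])
    # -> video_123 1000 {'nudity'}
```

In a design like this, the volume of reports becomes evidence a reviewer can weigh rather than a trigger that removes content on its own — which is the behavior TikTok says its system follows.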

Some of the programmers behind automated mass-reporting tools affirm this. H4xton — who spoke with The Times over a mix of online messaging apps — said his Reportation Bot can lead to the removal only of TikToks that legitimately violate the platform’s rules. It can speed up a moderation process that might otherwise take days, he said, but “won’t work if there is not anything wrong with the video.”

Filza Omran, a 22-year-old Saudi coder who identified himself as the author of another mass-reporting script posted on GitHub, said that if his tool was used to report a video that didn’t break any of TikTok’s rules, the most he thinks would happen would be that the reported account would get briefly blocked from posting new videos. Within minutes, Omran said over the messaging app Telegram, TikTok would confirm that the reported video hadn’t broken any rules and would restore the user’s full access.

But other people involved in this shadow economy make more sweeping claims. One of the scripts circulated on hacker forums comes with a description: “Quick little bot I made. Mass reports an account til it gets banned which takes about an hour.”

A user The Times found in the comments section below a different mass-reporting tool, who identified himself as an 18-year-old Hungarian named Dénes Zarfa Szú, said he has enlisted such tools “to mass-report bully posts” and accounts peddling sexual content. He said the limiting factor on the tools’ efficacy has been how popular a post was, not whether that post broke any rules.

“You can take down almost anything,” Szú said in an email, as long as it’s not “insanely popular.”

And a 20-year-old programmer from Kurdistan who goes by the screen name Mohamed Linux said a mass-reporting tool he made could get videos deleted even if they didn’t break any rules.

These are difficult claims to prove without back-end access to TikTok’s moderation system — and Linux, who discussed his work via Telegram and declined to give his name due to privacy concerns, said his program no longer works because TikTok fixed a bug he’d been exploiting. (The Times found Linux’s code on GitHub, although Linux said that it had been leaked there and that he normally sells it to private individuals for $50.)

Yet the lack of clarity around how well mass reporting works hasn’t stopped it from capturing the imaginations of TikTokers, many of whom lack better answers as to why their videos keep disappearing. In the comments section below a recent statement TikTok made acknowledging concerns about mass reporting, swarms of users — some with millions of followers — complained that the practice had led to their posts and accounts getting banned for unfair or altogether fabricated reasons.

Among those critics was Allen Polyakov, a gamer and TikTok creator affiliated with the esports organization Luminosity Gaming, who wrote that the platform had “taken down many posts and streams of mine because I’ve been mass reported.”

Elaborating on those complaints later, he told The Times that mass reporting became a big issue for him only after he began getting popular on TikTok.

“Around summer of last year, I started seeing that a lot of my videos were getting taken down,” said Polyakov, 27. But he couldn’t figure out why certain videos had been removed. “I would post a video of me playing ‘Fortnite,’ and it would get taken down” after being falsely flagged for containing nudity or sexual activity, he said.

The seemingly nonsensical nature of the takedowns led him to think trolls were mass reporting his posts. It wasn’t pure speculation: He said people have come into his livestreams and bragged about successfully mass reporting his content, needling him with taunts of “We got your video taken down” and “How does it feel to lose a viral video?”

Polyakov made clear that he loves TikTok.

“It’s changed my life and given me so many opportunities,” he said.

But the platform seems to follow a “guilty till proven innocent” ethos, he said, which errs on the side of removing videos that receive lots of reports, then leaves it up to creators to appeal the decisions after the fact.

Those appeals can take a few days, he said, which might as well be a millennium, given TikTok’s fast-moving culture.

“I would win most of my appeals — but because it’s already down for 48 to 72 hours, the trend might have went away; the relevance of that video might have went away,” he said.

As with many goods and services that exist on the periphery of polite society, there’s no guarantee that mass-reporting tools will work. Complaints about broken links and useless programs are common on the hacker forums where such software is posted.

But technical reviews of several mass-reporting tools posted on GitHub — including those written by H4xton, Omran and Linux — suggest that this cottage industry is not entirely smoke and mirrors.

Francesco Bailo, a lecturer in digital and social media at the University of Technology Sydney, said what the tools “claim to do is not technically complicated.”

“Do they work? Possibly they worked when they were first written,” Bailo said in an email. But the programs “don’t seem to be actively maintained,” which is essential given that TikTok is probably “monitoring and contrasting this kind of activity” in a sort of coding arms race.

Patrik Wikstrom, a communication professor at the Queensland University of Technology, was similarly circumspect.

“They might work, but they most likely need a significant amount of handholding to do the job well,” Wikstrom said via email. Because TikTok doesn’t want content reports to be sent from anywhere but the confines of the company’s own app, he said, mass reporting requires some technical trickery: “I suspect they need a lot of manual work not to get kicked out.”

However unreliable mass-reporting tools are — and however successful TikTok is in separating the complaints they generate from more legitimate ones — influencers including Coyne and Polyakov insist that the problem is one the company needs to start taking more seriously.

“This is literally the only platform that I’ve ever had any issues” on, Polyakov said. “I can post any video that I have on TikTok anywhere else, and it won’t be an issue.

“Might you get some kids being assholes in the comments?” he said. “Yeah — but they don’t have the ability to take down your account.”


[Photo: Kirill Kudryavtsev / AFP/Getty Images] FOR TIKTOK celebrities, “mass-reporting” programs are a major concern. One person armed with such a tool can flag content for violating the platform’s rules and have the videos removed and their creators banned.
