Musk cleansing Twitter of child abuse content isn't going well
Over 120,000 views of a video showing a boy being sexually assaulted. A recommendation engine suggesting that a user follow content related to exploited children. Users continually posting abusive material, delays in taking it down when it is detected and friction with organizations that police it.
All since Elon Musk declared that “removing child exploitation is priority #1” in a tweet in late November.
Under Musk's ownership, Twitter's head of safety, Ella Irwin, said she had been moving rapidly to combat child sexual abuse material, which was prevalent on the site — as it is on most tech platforms — under the previous owners. “Twitter 2.0” would be different, the company promised.
But a review by The New York Times found that the imagery, commonly known as child pornography, persisted on the platform, including widely circulated material that authorities consider the easiest to detect and eliminate.
After Musk took the reins in late October, Twitter largely eliminated or lost staff experienced with the problem and failed to prevent the spread of abusive images previously identified by authorities, the review shows. Twitter also stopped paying for some detection software considered key to its efforts.
All the while, people on dark-web forums discuss how Twitter remains a platform where they can easily find the material while avoiding detection, according to transcripts of those forums from an anti-abuse group that monitors them.
Irwin and others at Twitter said their efforts under Musk were paying off: during the first full month of the new ownership, the company said, it suspended nearly 300,000 accounts for violating “child sexual exploitation” policies, 57% more than usual.