Child porn still accessible on Twitter, review finds
Social media giant struggles to remove abusive imagery despite Musk’s vow
More than 120,000 views of a video showing a boy being sexually assaulted. A recommendation engine suggesting that a user follow content related to exploited children. Users continually posting abusive material, delays in taking it down when it is detected and friction with organizations that police it.
All since Elon Musk declared that “removing child exploitation is priority #1” in a tweet in late November.
Under Musk’s ownership, Twitter’s head of safety, Ella Irwin, said she had been moving rapidly to combat child sexual abuse material, which was prevalent on the site — as it is on most tech platforms — under the previous owners. “Twitter 2.0” will be different, the company promised.
But a review by The New York Times found that the imagery, commonly known as child pornography, persisted on the platform, including widely circulated material that authorities consider the easiest to detect and eliminate.
After Musk took the reins in late October, Twitter largely eliminated or lost staff experienced with the problem and failed to prevent the spread of abusive images previously identified by authorities, the review shows. Twitter also stopped paying for some detection software.
All the while, people on dark-web forums discuss how Twitter remains a platform where they can easily find the material while avoiding detection, according to transcripts of those forums from an anti-abuse group that monitors them.
In a Twitter audio chat with Irwin in early December, an independent researcher working with Twitter said illegal content had been publicly available on the platform for years and garnered millions of views.
But Irwin and others at Twitter said their efforts under Musk were paying off. During the first full month of the new ownership, the company suspended nearly 300,000 accounts for violating “child sexual exploitation” policies, 57% more than usual, the company said.
The effort accelerated in January, Twitter said, when it suspended 404,000 accounts.
“Our recent approach is more aggressive,” the company said in a series of tweets last week. It said it had also cracked down on people who search for the exploitative material and had reduced successful searches by 99% since December.
Irwin, in an interview, said the bulk of suspensions involved accounts that engaged with the material or were claiming to sell or distribute it, rather than those that posted it. She did not dispute that child sexual abuse content remains available on the platform, saying that “we absolutely know that we are still missing some things that we need to be able to detect better.”
She added that Twitter was hiring employees and deploying “new mechanisms” to fight the problem.
Wired, NBC and others have detailed Twitter’s ongoing struggles with child abuse imagery under Musk.
On Jan. 31, Sen. Dick Durbin, D-Ill., asked the Justice Department to review Twitter’s record in addressing the problem.