Facebook removes 583m fake accounts in three months
Facebook revealed yesterday that it removed more than half a billion fake accounts and millions of pieces of violent or obscene content during the first three months of this year, pledging more transparency while shielding its chief executive from new public questioning about the company’s business practices.
The findings, the company's first public release of its internal moderation figures, illustrate the gargantuan task Facebook faces in policing the world's largest social network.
“My top priorities this year are keeping people safe and developing new ways for our community to participate in governance and holding us accountable,” wrote Facebook CEO Mark Zuckerberg in a post.
Facebook said it removed 583 million fake accounts, 21 million pieces of content featuring sex or nudity, 2.5 million pieces of hate speech and almost 2 million items related to terrorism by al-Qaeda and Isis in the first quarter of 2018.
For every 10,000 pieces of content viewed, the company said, roughly eight were removed for featuring sex or nudity in the first quarter, up from seven at the end of last year.
Facebook’s new report, which it plans to update twice a year, comes a month after the company published its internal rules for how reviewers decide what content should be removed. The company says it has 10,000 human moderators helping to remove objectionable content and plans to double that number by the end of the year.
Facebook’s report suggests its investment in AI to help moderate objectionable content is slowly paying off. The company says more than 96 per cent of the posts it removed for featuring sex, nudity or terrorism-related content were flagged by monitoring software before any users reported them. Users, however, still report the majority of hate-speech posts, about 62 per cent, before Facebook takes them down.