San Francisco Chronicle

Facebook lists content it took down from site

By Sheera Frenkel

Facebook has been under pressure for its failure to remove violence, nudity, hate speech and other inflammatory content from its site. Government officials, activists and academics have long pushed the social network to disclose more about how it deals with such posts.

Now, Facebook is pulling back the curtain on its efforts.

On Tuesday, the Silicon Valley company published numbers for the first time detailing how much and what type of content it takes down from the social network. In an 86-page report, Facebook revealed that it deleted 865.8 million posts in the first quarter of 2018, the vast majority of which were spam, with a minority of posts related to nudity, graphic violence, hate speech and terrorism.

Facebook also said it removed 583 million fake accounts in the same period, the equivalent of 3 to 4 percent of its monthly users.

Guy Rosen, vice president of product management, said Facebook has substantially increased its efforts over the past 18 months to flag and remove inappropriate content. The inaugural report was intended to “help our teams understand what is happening” on the site, he said. Facebook hopes to continue publishing reports about its content removal every quarter.

The social network is trying for more transparency after a turbulent period. Facebook has been under fire for a proliferation of false news, divisive messages and other inappropriate content on its site, which in some cases have led to real-life incidents. Graphic violence continues to be widely shared on Facebook, especially in countries like Myanmar and Sri Lanka, stoking tensions and helping to fuel attacks.

Facebook has separately been grappling with a data privacy scandal over the improper harvesting of millions of its users’ information by political consulting firm Cambridge Analytica. CEO Mark Zuckerberg has said that the company needs to do better and has pledged to curb the abuse of its service.

On Monday, as part of an attempt to improve protection of its users’ information, Facebook said it had suspended roughly 200 third-party apps that collected data from its members while it does a thorough investigation.

Tuesday’s report on content removal is another step by Facebook to clean up its site. But the figures the company published were limited. Facebook declined to provide examples of graphically violent posts or hate speech that it removed, for example. And it said it had taken down more posts from its site in the first three months of 2018 than it had during the last quarter of 2017, but it gave no specific figures from previous years, making it hard to assess how much it had stepped up its efforts.

Still, Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation, said she welcomed Facebook’s report.

“It’s a good move, and it’s a long time coming,” she said. “But it’s also frustrating, because we’ve known that this has needed to happen for a long time. We need more transparency about how Facebook identifies content and what it removes.”

Facebook previously declined to reveal its content-removal efforts, instead publishing a country-by-country breakdown of how many requests it received from governments to obtain Facebook data or restrict content from users in that country. Those figures did not specify what type of data the governments asked for or what posts were restricted. Facebook also published its latest country-by-country report Tuesday.

According to the report, about 97 percent of all the content that Facebook removed from its site in the first quarter was spam. About 2.4 percent of the deleted content had nudity, with even smaller percentages of posts removed for graphic violence, hate speech and terrorism.

Facebook attributed the increase in content removal in the first quarter to improved artificial intelligence programs that could detect and flag offensive content. Zuckerberg has long highlighted AI as key to helping Facebook sift through the billions of pieces of content that people post to its site every day.

“If we do our job really well, we can be in a place where every piece of content is flagged by artificial intelligen­ce before our users see it,” said Alex Schultz, Facebook’s vice president of data analytics. “Our goal is to drive this to 100 percent.”

According to the new report, AI found 99.5 percent of terrorist content on the site, leading to the removal of roughly 1.9 million pieces of content in the first quarter. It also detected 95.8 percent of posts that were problematic because of nudity, with 21 million such posts taken down.

But Facebook relied on human moderators to identify hate speech, because automated programs have a hard time understanding context and culture. Of the 2.5 million pieces of hate speech Facebook removed in the first quarter, 38 percent was detected by AI, according to the new report.

Facebook said it also removed 3.4 million posts that had graphic violence, 85.6 percent of which were detected by AI.

Photo: Michael Macor / The Chronicle. Facebook says it stepped up its policing of content during the first quarter.
