Penticton Herald

Facebook blames pandemic for lack of enforcement


OAKLAND, Calif. — Looks like the machines aren’t ready to take over just yet.

The COVID-19 pandemic hampered Facebook’s ability to remove harmful and forbidden material from its platforms, the company said Tuesday. Sending its content moderators home in March amid the pandemic meant the company removed less of the harmful material on Facebook and Instagram involving suicide, self-injury, child nudity and sexual exploitation.

Sending its human reviewers home meant that Facebook relied more on technology than on people to find posts, photos and other content that violates its rules.

“Today’s report shows the impact of COVID-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology,” Guy Rosen, Facebook’s vice-president of integrity, wrote in a blog post.

The company said Tuesday that it has since brought many reviewers back to working online from home and, “where it is safe,” a smaller number into offices.

But Facebook also said its systems have gotten better at proactively detecting hate speech, meaning it is found and removed before anyone sees it. The company said its detection rate increased 6 points in the second quarter, to 95% from 89%.

Facebook said it took action on 22.5 million pieces of content — like posts, photos or videos — for hate speech violations in the second quarter, up from 9.6 million in the first quarter. The social network said that’s because it has expanded its automation technology into Spanish, Arabic and Indonesian and made improvements to its English detection technology.
