Facebook to pay $52m to moderators scarred by disturbing images
FACEBOOK has agreed to pay $52m (€48m) to settle a lawsuit by content moderators who alleged they suffered psychological scars from repeated exposure to disturbing material, including images of child sex abuse and terrorism.
The settlement will cover more than 10,000 current and former content moderators in California, Arizona, Texas and Florida, who can receive $1,000 for medical screening as well as additional payments for treatment if required, the lawyers said.
Facebook will also provide on-site coaching and tools that give moderators more control over how they view images, mitigating their exposure to the material, according to the proposed settlement.
“We are so pleased that Facebook worked with us to create an unprecedented programme to help people performing work that was unimaginable even a few years ago,” said Steve Williams, one of the attorneys representing the moderators. “The harm that can be suffered from this work is real and severe.”
In the wake of the 2016 presidential election, Facebook rushed to expand efforts to police its platforms, trying to keep political misinformation, graphic violence, terrorist propaganda and ‘revenge porn’ off the sites. This has entailed both new technology and thousands of new workers. As of last year, Facebook had about 15,000 content reviewers, almost all of whom worked not for Facebook itself but for staffing firms like Accenture and Cognizant.
The settlement comes after many content moderators were sidelined by the pandemic because of security and privacy concerns over the work being done from home.
Facebook has been slowly bringing the moderators back online, but over the long term the company is investing more in artificial intelligence than in human reviewers.
The settlement may open the way for moderators in other countries to pursue similar compensation for medical expenses or damage to mental health.
Some of Ireland’s 4,000 Facebook workers deal with moderation duties.
A spokeswoman for the company had no comment on possible remedies for Irish moderators.
The company released new figures this week outlining the number of instances of abuse and disturbing content it removes.