Gillespie has pored over seemingly every bit of information that has come to light regarding content moderation at a whole slew of past and present platforms. He has also interviewed decision makers and end users, read the posted community guidelines of the many platforms (he is perhaps the only person to have ever done so), and scrutinized the “Facebook files,” documents leaked last year that contain instructions on what Facebook moderators should keep or remove. (Keep: a picture of extremists with the caption, “They should be out playing.” Cut: a picture of extremists with the caption, “A great day.”)

What he uncovers is a series of moderation systems driven by the economics of keeping users on a particular site, set by people in Silicon Valley who “tend to build tools ‘for all’ that continue, extend, and reify the inequities they overlook.” Moderation is performed by a combination of humans and AI detection tools. The former raises issues regarding wages for workers in places like India and the Philippines, not to mention the psychological trauma of staring at horrific images and texts. Machine-learning algorithms still have a ways to go before they can detect problems without human oversight, and they raise concerns of their own: “Machine-learning techniques are inherently conservative. The faith in sophisticated pattern recognition that underlies them is built on assumptions about people: that people who demonstrate similar actions or say similar things are similar, that people who have acted in a certain way in the past are likely to continue, that association suggests guilt.”
Even so, if you’re looking for more evidence that Facebook will destroy us all, you won’t find it here. There are flaws in the machine, but it’s not too late to fix them. Gillespie offers explicit guidance on how to do that. For instance, “Platforms should make a radical commitment to turning the data they already have back to me in a legible and actionable form, everything they could tell me contextually about why a post is there and how I should assess it.” But not all of Gillespie’s guidance is directed at Silicon Valley. All social media users are, to an extent, custodians of the internet, in that we all engage with algorithmically designed news feeds. Some of us flag content we deem offensive; some of us block obnoxious (or worse) users. And all of us, whether or not we use these platforms, are influenced by the role of social media in public discourse. Among Gillespie’s conclusions is a call to action for us all: “We desperately need a thorough, public discussion about the social responsibility of platforms.”
— Grace Parazzoli