The New York Review of Books

The Cleaners, a PBS Independent Lens documentary film directed by Moritz Riesewieck and Hans Block

The Facebook Dilemma, a PBS Frontline documentary television series directed by James Jacoby

- Sue Halpern

Fifteen minutes into The Cleaners, the unsettling documentary about the thousands of anonymous “content moderators” working behind the scenes in third-world countries for Facebook, Instagram, and other social media companies, the filmmakers provide a perfect—if obvious—visual metaphor: they show a young Filipino woman walking through a garbage-strewn Manila slum as children pick through a trash heap. “My mom always told me that if I don’t study well, I’ll end up a scavenger,” she says. “All they do is pick up garbage. They rely on garbage. It’s the only livelihood they know . . . . I was afraid of ending up here, picking up garbage. It was one of the reasons that drove me to study well.” Instead, studying well landed her in a cubicle in an obscure office building, picking through the detritus of human behavior—the photos of child sexual exploitation, the calls to murder, the suicide videos, 25,000 items a day, with an allowance of only three errors per month before getting sacked—and deciding in an instant what should be deleted and what can stay.

“I’ve seen hundreds of beheadings in my complete career for content moderation,” a nameless young man says. “Not only pictures, even the videos. A two-minute video of the act of beheading.” Another talks of watching someone kill himself online, thinking at first it was a joke. And then there’s a young woman—they are all young—who confesses to having been sexually naive before taking the job: “The most shocking thing that I saw . . . was a kid sucking a dick inside a cubicle. And the kid was like really naked. It was like a girl around six years of age.” Before she worked as a content moderator, she says, she had never heard the word “dick,” let alone seen one.

To be clear, these were images that had already appeared on social media and had been flagged by users or by Facebook algorithms for possibly violating a site’s “community standards,” a nebulous term that seems to mean “stuff that could get us in trouble with someone,” like a government or a cohort of users.

Those standards, like much that has been created in Silicon Valley, grow out of a hands-off, responsibility-shunning, libertarian ethos. “We’re going to allow people to go right up to the edge [of what’s acceptable] and we’re going to allow other people to respond,” Tim Sparapani, Facebook’s director of public policy from 2009 to 2011, tells the journalist James Jacoby, whose two-part Frontline documentary, The Facebook Dilemma, offers the best background yet for everything we’ve been reading and hearing about the company’s derelictions these past few years. “We had to set up some ground rules,” Sparapani continues. “Basic decency, no nudity, and no violent or hateful speech. And after that we felt some reluctance to interpose our value system on this worldwide community that was growing.”

Yet these rules are simultaneously so vague and so exacting that content moderators viewing the famous image from the Vietnam War of a naked girl running down the road during a napalm attack chose to delete it because she wasn’t wearing clothes. They are continually removing videos from Syria posted by the artist and photographer Khaled Barakeh—who has left his homeland for Berlin and, in the absence of traditional journalism, uses his Facebook page as an ad hoc clearinghouse of information about the war there—because the rules make no distinction between intent and content. At the same time, neo-Nazi and ISIS recruitment videos remain on Facebook, which also recently hosted an auction for a child bride in South Sudan. Facebook executives had little problem allowing Donald Trump’s egregious, race-baiting comments about Muslims to be broadcast on the site during his presidential campaign because, as The New York Times reported in its chilling recent exposé, they decided “that the candidate’s views had public value.”*

This, perhaps, should not have been surprising. Hate speech, propaganda, and incitements to violence have found a home on a site whose developers pride themselves on both “connecting the world” and upholding “free speech.” If it wasn’t obvious before, this became unmistakably clear in the days and weeks following the Arab Spring in 2011, when antidemocratic forces in Egypt used Facebook to spread disinformation and incite sectarian violence. “The hardest part for me was seeing the tool that brought us together tearing us apart,” says Wael Ghonim, the Google employee whose Facebook page was widely credited with driving the pro-democracy movement. “These tools are just enablers for whomever. They don’t separate between what’s good and bad. They just look at engagement metrics.” Since then, Facebook has been used to abet genocide in Myanmar, India, and Sri Lanka, as well as in Nigeria, where the company has just four “fact checkers” to assess content on a platform used by twenty-four million Nigerians.

Facebook’s response to these atrocities has been at best muted. The party line, articulated by employee after employee to Jacoby in the Frontline series, is that the company was “too slow” to recognize the ways in which the platform could be, and had been, used maliciously. This includes its response to interference in the US presidential election, when Russian operatives seeded divisive content throughout Facebook on gun rights and gay rights and other hot-button issues. As the Times reporters point out, CEO Mark Zuckerberg’s initial, aw-shucks denial a month after the election—he said that he couldn’t imagine this made-in-a-college-dorm-room creation of his had that much influence—gave way to more concerted efforts within the company to downplay Facebook’s part in disseminating propaganda and ill will. Its April 2017 paper highlighting the findings of the company’s internal investigation into election meddling never mentions Russia, even though the company was aware of the Russian influence campaign. Five months later, in a company blog post, Facebook continued to minimize its influence, claiming that the total cost of Russian ads on the platform was a mere $100,000, for about three thousand ads. Finally, in October 2017, the company admitted that close to 126 million people had seen the Russian Facebook ads.

Such prevarications are the Facebook way. As The New York Times has reported, the company has continued to share user data, including private messages, with third parties like Netflix and Spotify, even after claiming numerous times that it had stopped the practice. It also gave access to the Russian search firm Yandex, which is reputed to have ties to Russian intelligence. This past November, after the Times revealed that the company had hired the Republican opposition research firm Definers Public Affairs to, among other things, circulate untrue stories that the philanthropist George Soros had a financial interest in publicly criticizing Facebook—stories that fed into the anti-Semitic memes about Soros that circulate on social media—its top two executives, Zuckerberg and Chief Operating Officer Sheryl Sandberg, claimed that they had no idea that this had happened.

“I did not know we hired them or about the work they were doing,” Sandberg wrote in a blog post, challenging the veracity of the Times article. But a week later, in a new post, she recanted, admitting that, actually, “some of their work was incorporated into materials presented to me and I received a small number of emails where Definers was referenced.” (Eventually it came out that after Soros’s particularly fierce critique of social media at the World Economic Forum in January 2018, Sandberg had ordered an investigation into whether the financier was shorting Facebook stock, though Definers’ work for Facebook began before that.) Sandberg appeared to be following the Zuckerberg playbook: “I think it’s more useful to, like, make things happen and then, like, apologize later, than it is to make sure that you dot all your I’s now and then, like, just not get stuff done,” he says in The Facebook Dilemma. Over the years, he and Sandberg have done a lot of apologizing.

They have also gotten a lot of stuff done since 2008, when Zuckerberg hired Sandberg away from Google to shore up and run the business side of the company. Until then, Facebook had focused on building its user base, but Sandberg’s arrival brought a more ambitious pursuit, a continuation of her work at Google: to turn Facebook into a colossal advertising platform by harvesting the innumerable bits of personal data people were posting and sharing on the site. To lure advertisers, Sandberg’s team developed new ways to obtain personal data from users as they traversed the Internet. They also collect data from people who are not Facebook users but who happen to visit Internet sites that use Facebook’s technology. To this information they added data purchased from brokers like Acxiom and Experian, which further refined Facebook’s ability to track people when they weren’t online, and to parse individuals with increasing specificity, enabling ever-more-targeted ads. In an example of how Facebook continues to cash in on this data, a few days after the recent Pittsburgh synagogue shooting—in which eleven congregants were murdered by Robert Bowers, whose page on the social media site Gab was filled with anti-Semitic rants—The Intercept found that Facebook allowed advertisers to send ads to people who had expressed an interest in “white genocide conspiracy theory,” a category with 168,000 potential members.

For Facebook’s business model to work, the data stream has to flow robustly, and it has to keep growing. Zuckerberg’s mantra, repeated over and over, that the goal of Facebook was “to connect the world” turns out not to be about creating a borderless digital utopia where the whole world gets along, but about ensuring the company’s bottom line. “The [Facebook] Growth team had tons of engineers figuring out how you could make the new user experience more engaging, how you could figure out how to get more people to sign up,” Facebook’s former operations manager, Sandy Parakilas, tells Frontline. “Everyone was focused on growth, growth, growth.”

While the formula they came up with was quite simple—growth is a function of engagement—it so happened that engagement was best served by circulating sensational, divisive, and salacious content. Allowing discordant and false material on the platform was not a glitch in the business plan—it was the plan. In the United States, at least, Facebook was able to take cover behind Section 230 of the Communications Decency Act, which basically says that a platform provider is not responsible for the material disseminated on its platform, or for its consequences. It is also what has enabled Facebook to publish first and delete second—or not at all.

If Zuckerberg, Sandberg, and other Facebook employees were unaware that their platform could be hijacked by malicious actors before the Arab Spring, and if, afterward, they failed to hear alarm bells ringing, it was because they were holding their hands over their ears. From 2012 to 2015, analysts at the Defense Advanced Research Projects Agency (DARPA), the research arm of the Department of Defense, published more than two hundred papers and reports detailing the kinds of manipulation and disinformation they were seeing on Facebook and other social media. Around the same time, the Internet Research Agency, the Russian propaganda factory that was active on social media during the 2016 US presidential election, was honing its craft in Ukraine, sending out all kinds of false and inflammatory stories over Facebook, provoking a long-simmering ethnic conflict in an effort to fracture the country from within. “The response that Facebook gave us is, ‘Sorry we are an open platform. Anybody can do anything . . . within our policy, which is written on the website,’” Dmytro Shymkiv, an adviser to Ukrainian president Petro Poroshenko, told Frontline. “And when I said, ‘But this is fake accounts, you could verify that,’ [they said,] ‘Well, we’ll think about this, but you know, we have freedom of speech and we are a very pro-democracy platform. Everybody can say anything.’”

By now it should be obvious that Facebook’s so-called pro-democracy rhetoric has been fundamentally damaging to real democracies and to democratic movements around the world. It has also directly benefited authoritarian regimes, which have relied on the platform to spread untruths in order to control and manipulate their citizens. In the Philippines, as content moderators busily remove posts and pictures according to a bespoke metric developed by “mostly twenty-something-year-olds” in Menlo Park, California, the president, Rodrigo Duterte, is busy on Facebook too, using paid followers to spread falsehoods about his critics and his policies. The journalist Maria Ressa, whose news organization, Rappler, has been keeping a database of the more than twelve million Facebook accounts that have attacked critics of Duterte and have been traced back to the president, has been a target of those accounts as well, at one point getting as many as ninety hate messages an hour via Facebook—messages like “I want Maria Ressa to be raped repeatedly to death.”

Facebook favors democratic norms selectively—when it is financially expedient—and abandons them when it’s not. Facebook’s general counsel, Colin Stretch, described it to the Senate Select Committee on Intelligence this way:

We do have many instances where we have content reported to us from foreign governments that is illegal under the laws of those governments . . . . We deploy what we call geoblocking or IP blocking, so that the content will not be visible in that country.

This is best illustrated by the company’s actions in Turkey, where, according to Yaman Akdeniz, a law professor at Istanbul Bilgi University, “Facebook removes everything and anything from their social media platform when the Turkish authorities ask them to do so.” If they don’t, he says, the company will be blocked and lose business.

While Facebook is currently shut out of the Chinese market, the company has not ruled out finding a way to operate there in spite of the country’s robust censorship laws, and last summer it established a Chinese subsidiary. But perhaps most telling was an exchange between Zuckerberg and Ressa that she recounted during an interview with Recode’s Kara Swisher. Ressa was explaining to Zuckerberg how critics of the Duterte regime were being threatened on Facebook with calls for them to be raped and killed:

I said, “Mark, 97 percent of Filipinos on the Internet are on Facebook.” I invited him to come to the Philippines because he had to see the impact of this. You have to understand the impact . . . . He was frowning while I was saying that. I said, “Why, why?” He said, “Oh well. What are the other 3 percent doing, Maria?”

Ninety-seven percent is a useful statistic to keep in mind while listening to Monika Bickert, Facebook’s head of global policy management, explain in The Facebook Dilemma that “probably the group that holds us the most accountable are the people using the service. If it’s not a safe place for them to come and communicate, they are not going to use it.” But in countries like the Philippines and Myanmar, where the vast majority of people access the Internet through Facebook, not using the platform is likely not an option. Indeed, establishing an equivalence between Facebook and the Internet is one of the payoffs of Free Basics, an app Facebook created that provides purposefully limited Internet access—there is no stand-alone e-mail server and Facebook is the only social media platform—to people in developing countries who wouldn’t otherwise be able to afford to go online. Of course, Facebook captures user data since all user activity passes through its servers. (Free Basics is available in the Philippines and Nigeria, among many other countries. India, however, banned it after protests accusing the company of cultural imperialism and digital colonialism.) But even if those who feel unsafe were to leave Facebook, as Bickert suggests, they remain vulnerable to the violence being fomented against them on the platform—violence that, as we have seen, even in this country, cannot be sequestered online.

“We are working with partners . . . to analyze potentially harmful content and understand how it spreads in Myanmar,” the company wrote in early November 2018, in response to a report it commissioned about its part in the genocide there. “We also just extended the use of artificial intelligence to posts that contain graphic violence and comments that are violent and dehumanizing, and will reduce their distribution while they undergo review by our Community Operations team.” The insufficiency of these ex-post-facto strategies should be obvious: they are triggered by potentially dangerous content, but they cannot preempt it.

Nor do Facebook’s well-publicized efforts to remove violent and hateful pages and individuals rid the platform of violence or hate, since it continues to allow private and secret Facebook groups where malevolent actors can organize and amplify their message with little oversight and no adherence to “community standards.” Such are the consequences of the company’s so-called pro-democracy ideology. Even more, this is what happens when a for-profit tech company with dominion over two billion people has little will and less expertise to govern or be governed.

It might have seemed that the 2016 US presidential election was a turning point. The evidence—despite Facebook’s distortions—was clear: the platform was used by Russian operatives to sow discord, and, as the Trump campaign also did, to dissuade African-Americans from voting. In response, the company instituted a new political advertising policy, enacted in time for the 2018 midterms, intended to prevent foreign nationals from buying ads and promoting content designed to sway the electorate. The policy requires anyone purchasing a political ad to provide documentation that they are an American citizen, and for each ad to reveal its provenance. But beyond that, Facebook does not require or check to see that the person who manages the ad is the purchaser of the ad. An investigation by ProPublica uncovered a dozen ad campaigns paid for by nonexistent companies created by businesses and individuals, including fossil fuel and insurance companies, to hide their funders. And Jonathan Albright, a professor at Columbia’s Tow Center for Digital Journalism, found “political funding groups being managed by accounts based outside the United States.”

Would government regulation be more exacting? For the time being, there is no way to know. In April, in testimony before Congress, Zuckerberg told Senator Amy Klobuchar that he would support the Honest Ads Act, a bipartisan effort to ensure full disclosure of the money behind political ads on the Internet. But behind the scenes, his company was lobbying hard to kill the bill. One reason, according to a congressional staffer interviewed by Quartz, is that Facebook felt it was voluntarily doing what the law would require, though this appears to be an overly optimistic—or arrogant or ignorant—assessment of its own efforts. “Facebook is an idealistic and optimistic company,” Zuckerberg said in his prepared congressional testimony that day in April. More recently he told his colleagues that the company is “at war,” and vowed to adopt a more aggressive management style. The Facebook dilemma, going forward, is not how to reconcile the two. It’s that no matter how optimistic its outlook or obdurate its leader, an online business that publishes first and moderates later will always be available to those who aim to do real harm in the real world.

—December 19, 2018

Mark Zuckerberg testifying at a Senate hearing about Facebook’s use of user data, Washington, D.C., April 2018
