The Cleaners, a PBS Independent Lens documentary film directed by Moritz Riesewieck and Hans Block

The Facebook Dilemma, a PBS Frontline documentary television series directed by James Jacoby

The New York Review of Books - Sue Halpern

Fifteen minutes into The Cleaners, the unsettling documentary about the thousands of anonymous “content moderators” working behind the scenes in third-world countries for Facebook, Instagram, and other social media companies, the filmmakers provide a perfect—if obvious—visual metaphor: they show a young Filipino woman walking through a garbage-strewn Manila slum as children pick through a trash heap. “My mom always told me that if I don’t study well, I’ll end up a scavenger,” she says. “All they do is pick up garbage. They rely on garbage. It’s the only livelihood they know.... I was afraid of ending up here, picking up garbage. It was one of the reasons that drove me to study well.” Instead, studying well landed her in a cubicle in an obscure office building, picking through the detritus of human behavior—the photos of child sexual exploitation, the calls to murder, the suicide videos, 25,000 items a day, with an allowance of only three errors per month before getting sacked—and deciding in an instant what should be deleted and what can stay.

“I’ve seen hundreds of beheadings in my complete career for content moderation,” a nameless young man says. “Not only pictures, even the videos. A two-minute video of the act of beheading.” Another talks of watching someone kill himself online, thinking at first it was a joke. And then there’s a young woman—they are all young—who confesses to having been sexually naive before taking the job: “The most shocking thing that I saw ... was a kid sucking a dick inside a cubicle. And the kid was like really naked. It was like a girl around six years of age.” Before she worked as a content moderator, she says, she had never heard the word “dick,” let alone seen one.

To be clear, these were images that had already appeared on social media and had been flagged by users or by Facebook algorithms for possibly violating a site’s “community standards,” a nebulous term that seems to mean “stuff that could get us in trouble with someone,” like a government or a cohort of users.

Those standards, like much that has been created in Silicon Valley, grow out of a hands-off, responsibility-shunning, libertarian ethos. “We’re going to allow people to go right up to the edge [of what’s acceptable] and we’re going to allow other people to respond,” Tim Sparapani, Facebook’s director of public policy from 2009 to 2011, tells the journalist James Jacoby, whose two-part Frontline documentary, The Facebook Dilemma, offers the best background yet for everything we’ve been reading and hearing about the company’s derelictions these past few years. “We had to set up some ground rules,” Sparapani continues. “Basic decency, no nudity, and no violent or hateful speech. And after that we felt some reluctance to interpose our value system on this worldwide community that was growing.” Yet these rules are simultaneously so vague and so exacting that content moderators viewing the famous image from the Vietnam War of a naked girl running down the road during a napalm attack chose to delete it because she wasn’t wearing clothes. They are continually removing videos from Syria posted by the artist and photographer Khaled Barakeh—who has left his homeland for Berlin and, in the absence of traditional journalism, uses his Facebook page as an ad hoc clearinghouse of information about the war there—because the rules make no distinction between intent and content. At the same time, neo-Nazi and ISIS recruitment videos remain on Facebook, which also recently hosted an auction for a child bride in South Sudan.

Facebook executives had little problem allowing Donald Trump’s egregious, race-baiting comments about Muslims to be broadcast on the site during his presidential campaign because, as The New York Times reported in its chilling recent exposé, they decided “that the candidate’s views had public value.”* This, perhaps, should not have been surprising. Hate speech, propaganda, and incitements to violence have found a home on a site whose developers pride themselves on both “connecting the world” and upholding “free speech.” If it wasn’t obvious before, this became unmistakably clear in the days and weeks following the Arab Spring in 2011, when antidemocratic forces in Egypt used Facebook to spread disinformation and incite sectarian violence. “The hardest part for me was seeing the tool that brought us together tearing us apart,” says Wael Ghonim, the Google employee whose Facebook page was widely credited with driving the prodemocracy movement. “These tools are just enablers for whomever. They don’t separate between what’s good and bad. They just look at engagement metrics.” Since then, Facebook has been used to abet genocide in Myanmar, India, and Sri Lanka, as well as in Nigeria, where the company has just four “fact checkers” to assess content on a platform used by twenty-four million Nigerians.

Facebook’s response to these atrocities has been at best muted. The party line, articulated by employee after employee to Jacoby in the Frontline series, is that the company was “too slow” to recognize the ways in which the platform could be, and had been, used maliciously. This includes its response to interference in the US presidential election, when Russian operatives seeded divisive content throughout Facebook on gun rights and gay rights and other hot-button issues. As the Times reporters point out, CEO Mark Zuckerberg’s initial, aw-shucks denial a month after the election—he said that he couldn’t imagine this made-in-a-college-dorm-room creation of his had that much influence—gave way to more concerted efforts within the company to downplay Facebook’s part in disseminating propaganda and ill will. Its April 2017 paper highlighting the findings of the company’s internal investigation into election meddling never mentions Russia, even though the company was aware of the Russian influence campaign. Five months later, in a company blog post, Facebook continued to minimize its influence, claiming that the total cost of Russian ads on the platform was a mere $100,000, for about three thousand ads. Finally, in October 2017, the company admitted that close to 126 million people had seen the Russian Facebook ads. Such prevarications are the Facebook way. As The New York Times has reported, the company has continued to share user data, including private messages, with third parties like Netflix and Spotify, even after claiming numerous times that it had stopped the practice. It also gave access to the Russian search firm Yandex, which is reputed to have ties to Russian intelligence.

This past November, after the Times revealed that the company had hired the Republican opposition research firm Definers Public Affairs to, among other things, circulate untrue stories that the philanthropist George Soros had a financial interest in publicly criticizing Facebook—stories that fed into the anti-Semitic memes about Soros that circulate on social media—its top two executives, Zuckerberg and Chief Operating Officer Sheryl Sandberg, claimed that they had no idea that this had happened.

“I did not know we hired them or about the work they were doing,” Sandberg wrote in a blog post, challenging the veracity of the Times article. But a week later, in a new post, she recanted, admitting that, actually, “some of their work was incorporated into materials presented to me and I received a small number of emails where Definers was referenced.” (Eventually it came out that after Soros’s particularly fierce critique of social media at the World Economic Forum in January 2018, Sandberg had ordered an investigation into whether the financier was shorting Facebook stock, though Definers’ work for Facebook began before that.) Sandberg appeared to be following the Zuckerberg playbook: “I think it’s more useful to, like, make things happen and then, like, apologize later, than it is to make sure that you dot all your i’s now and then, like, just not get stuff done,” he says in The Facebook Dilemma. Over the years, he and Sandberg have done a lot of apologizing.

They have also gotten a lot of stuff done since 2008, when Zuckerberg hired Sandberg away from Google to shore up and run the business side of the company. Until then, Facebook had focused on building its user base, but Sandberg’s arrival brought a more ambitious pursuit, a continuation of her work at Google: to turn Facebook into a colossal advertising platform by harvesting the innumerable bits of personal data people were posting and sharing on the site. To lure advertisers, Sandberg’s team developed new ways to obtain personal data from users as they traversed the Internet. They also collected data from people who are not Facebook users but who happen to visit Internet sites that use Facebook’s technology. To this information they added data purchased from brokers like Acxiom and Experian, which further refined Facebook’s ability to track people when they weren’t online, and to parse individuals with increasing specificity, enabling ever-more-targeted ads. In an example of how Facebook continues to cash in on this data, a few days after the recent Pittsburgh synagogue shooting—in which eleven congregants were murdered by Robert Bowers, whose page on the social media site Gab was filled with anti-Semitic rants—The Intercept found that Facebook allowed advertisers to send ads to people who had expressed an interest in “white genocide conspiracy theory,” a category with 168,000 potential members.

For Facebook’s business model to work, the data stream has to flow robustly, and it has to keep growing. Zuckerberg’s mantra, repeated over and over, that the goal of Facebook was “to connect the world” turns out not to be about creating a borderless digital utopia where the whole world gets along, but about ensuring the company’s bottom line. “The [Facebook] Growth team had tons of engineers figuring out how you could make the new user experience more engaging, how you could figure out how to get more people to sign up,” Facebook’s former operations manager, Sandy Parakilas, tells Frontline. “Everyone was focused on growth, growth, growth.”

While the formula they came up with was quite simple—growth is a function of engagement—it so happened that engagement was best served by circulating sensational, divisive, and salacious content. Allowing discordant and false material on the platform was not a glitch in the business plan—it was the plan. In the United States, at least, Facebook was able to take cover behind Section 230 of the Communications Decency Act, which basically says that a platform provider is not responsible for the material disseminated on its platform, or for its consequences. It is also what has enabled Facebook to publish first and delete second—or not at all. If Zuckerberg, Sandberg, and other Facebook employees were unaware that their platform could be hijacked by malicious actors before the Arab Spring, and if, afterward, they failed to hear alarm bells ringing, it was because they were holding their hands over their ears. From 2012 to 2015, analysts at the Defense Advanced Research Projects Agency (DARPA), the research arm of the Department of Defense, published more than two hundred papers and reports detailing the kinds of manipulation and disinformation they were seeing on Facebook and other social media. Around the same time, the Internet Research Agency, the Russian propaganda factory that was active on social media during the 2016 US presidential election, was honing its craft in Ukraine, sending out all kinds of false and inflammatory stories over Facebook, provoking a long-simmering ethnic conflict in an effort to fracture the country from within. “The response that Facebook gave us is, ‘Sorry, we are an open platform. Anybody can do anything ... within our policy, which is written on the website,’” Dmytro Shymkiv, an adviser to Ukrainian president Petro Poroshenko, told Frontline. “And when I said, ‘But this is fake accounts, you could verify that,’ [they said,] ‘Well, we’ll think about this, but you know, we have freedom of speech and we are a very pro-democracy platform. Everybody can say anything.’”

By now it should be obvious that Facebook’s so-called pro-democracy rhetoric has been fundamentally damaging to real democracies and to democratic movements around the world. It has also directly benefited authoritarian regimes, which have relied on the platform to spread untruths in order to control and manipulate their citizens. In the Philippines, as content moderators busily remove posts and pictures according to a bespoke metric developed by “mostly twenty-something-year-olds” in Menlo Park, California, the president, Rodrigo Duterte, is busy on Facebook too, using paid followers to spread falsehoods about his critics and his policies. The journalist Maria Ressa, whose news organization, Rappler, has been keeping a database of the more than twelve million Facebook accounts that have attacked critics of Duterte and have been traced back to the president, has been a target of those accounts as well, at one point getting as many as ninety hate messages an hour via Facebook—messages like “I want Maria Ressa to be raped repeatedly to death.”

Facebook favors democratic norms selectively—when it is financially expedient—and abandons them when it’s not. Facebook’s general counsel, Colin Stretch, described it to the Senate Select Committee on Intelligence this way:

We do have many instances where we have content reported to us from foreign governments that is illegal under the laws of those governments.... We deploy what we call geoblocking or IP blocking, so that the content will not be visible in that country.

This is best illustrated by the company’s actions in Turkey, where, according to Yaman Akdeniz, a law professor at Istanbul Bilgi University, “Facebook removes everything and anything from their social media platform when the Turkish authorities ask them to do so.” If they don’t, he says, the company will be blocked and lose business.

While Facebook is currently shut out of the Chinese market, the company has not ruled out finding a way to operate there in spite of the country’s robust censorship laws, and last summer it established a Chinese subsidiary. But perhaps most telling was an exchange between Zuckerberg and Ressa that she recounted during an interview with Recode’s Kara Swisher. Ressa was explaining to Zuckerberg how critics of the Duterte regime were being threatened on Facebook with calls for them to be raped and killed:

I said, “Mark, 97 percent of Filipinos on the Internet are on Facebook.” I invited him to come to the Philippines because he had to see the impact of this. You have to understand the impact.... He was frowning while I was saying that. I said, “Why, why?” He said, “Oh well. What are the other 3 percent doing, Maria?”

Ninety-seven percent is a useful statistic to keep in mind while listening to Monika Bickert, Facebook’s head of global policy management, explain in The Facebook Dilemma that “probably the group that holds us the most accountable are the people using the service. If it’s not a safe place for them to come and communicate, they are not going to use it.” But in countries like the Philippines and Myanmar, where the vast majority of people access the Internet through Facebook, not using the platform is likely not an option. Indeed, establishing an equivalence between Facebook and the Internet is one of the payoffs of Free Basics, an app Facebook created that provides purposefully limited Internet access—there is no stand-alone e-mail server and Facebook is the only social media platform—to people in developing countries who wouldn’t otherwise be able to afford to go online. Of course, Facebook captures user data, since all user activity passes through its servers. (Free Basics is available in the Philippines and Nigeria, among many other countries. India, however, banned it after protests accusing the company of cultural imperialism and digital colonialism.) But even if those who feel unsafe were to leave Facebook, as Bickert suggests, they remain vulnerable to the violence being fomented against them on the platform—violence that, as we have seen, even in this country, cannot be sequestered online.

“We are working with partners ... to analyze potentially harmful content and understand how it spreads in Myanmar,” the company wrote in early November 2018, in response to a report it commissioned about its part in the genocide there. “We also just extended the use of artificial intelligence to posts that contain graphic violence and comments that are violent and dehumanizing, and will reduce their distribution while they undergo review by our Community Operations team.” The insufficiency of these ex-post-facto strategies should be obvious: they are triggered by potentially dangerous content, but they cannot preempt it.

Nor do Facebook’s well-publicized efforts to remove violent and hateful pages and individuals rid the platform of violence or hate, since it continues to allow private and secret Facebook groups where malevolent actors can organize and amplify their message with little oversight and no adherence to “community standards.” Such are the consequences of the company’s so-called pro-democracy ideology. Even more, this is what happens when a for-profit tech company with dominion over two billion people has little will and less expertise to govern or be governed. It might have seemed that the 2016 US presidential election was a turning point. The evidence—despite Facebook’s distortions—was clear: the platform was used by Russian operatives to sow discord and, as the Trump campaign also did, to dissuade African-Americans from voting. In response, the company instituted a new political advertising policy, enacted in time for the 2018 midterms, intended to prevent foreign nationals from buying ads and promoting content designed to sway the electorate. The policy requires anyone purchasing a political ad to provide documentation that they are an American citizen, and requires each ad to reveal its provenance. But beyond that, Facebook does not require or check to see that the person who manages the ad is the purchaser of the ad. An investigation by ProPublica uncovered a dozen ad campaigns paid for by nonexistent companies created by businesses and individuals, including fossil fuel and insurance companies, to hide their funders. And Jonathan Albright, a professor at Columbia’s Tow Center for Digital Journalism, found “political funding groups being managed by accounts based outside the United States.”

Would government regulation be more exacting? For the time being, there is no way to know. In April, in testimony before Congress, Zuckerberg told Senator Amy Klobuchar that he would support the Honest Ads Act, a bipartisan effort to ensure full disclosure of the money behind political ads on the Internet. But behind the scenes, his company was lobbying hard to kill the bill. One reason, according to a congressional staffer interviewed by Quartz, is that Facebook felt it was voluntarily doing what the law would require, though this appears to be an overly optimistic—or arrogant or ignorant—assessment of its own efforts. “Facebook is an idealistic and optimistic company,” Zuckerberg said in his prepared congressional testimony that day in April. More recently he told his colleagues that the company is “at war,” and vowed to adopt a more aggressive management style. The Facebook dilemma, going forward, is not how to reconcile the two. It’s that no matter how optimistic its outlook or obdurate its leader, an online business that publishes first and moderates later will always be available to those who aim to do real harm in the real world.

—December 19, 2018

Mark Zuckerberg testifying at a Senate hearing about Facebook’s use of user data, Washington, D.C., April 2018
