If you’re looking for more evidence that Facebook will destroy us all, you won’t find it here. There are flaws in the machine, but it’s not too late to fix them.


Gillespie has examined seemingly any bit of information that has come to light regarding content moderation at a whole slew of past and present platforms, in addition to conducting interviews with decision makers and end users, reading through the posted community guidelines of the many platforms (he is perhaps the only person to have ever done so), and scrutinizing the “Facebook files,” documents leaked to The Guardian last year that contain instructions on what Facebook moderators should keep or remove. (Keep: a picture of extremists with the caption, “They should be out playing […].” Cut: a picture of extremists with the caption, “A great day.”) What he uncovers is a series of moderation systems that are driven by the economics of keeping users on a particular site, set by people in Silicon Valley who “tend to build tools ‘for all’ that continue, extend, and reify the inequities they overlook.”

Moderation is performed by a combination of humans and AI detection tools. The former introduces issues regarding wages for workers in places like India and the Philippines, not to mention the psychological trauma of staring at horrific images and texts. Machine-learning algorithms still have a ways to go before they can detect problems without human oversight, and they raise their own concerns: “Machine-learning techniques are inherently conservative. The faith in sophisticated pattern recognition that underlies them is built on assumptions about people: that people who demonstrate similar actions or say similar things are similar, that people who have acted in a certain way in the past are likely to continue, that association suggests guilt.”

Even so, if you’re looking for more evidence that Facebook will destroy us all, you won’t find it here. There are flaws in the machine, but it’s not too late to fix them. Gillespie offers explicit guidance on how to do that. For instance, “Platforms should make a radical commitment to turning the data they already have back to me in a legible and actionable form, everything they could tell me contextually about why a post is there and how I should assess it.” But not all of Gillespie’s guidance is directed at Silicon Valley. All social media users are, to an extent, custodians of the internet, in that all engage with algorithmically designed news feeds. Some of us flag content we deem offensive; some of us block obnoxious (or worse) users. And all of us, regardless of whether or not we use these platforms, are influenced by the role of social media in public discourse. Among Gillespie’s conclusions is a call to action for us all: “We desperately need a thorough, public discussion about the social responsibility of platforms.”

— Grace Parazzoli
