Can anyone stop the hate?

The Week (US), News

Online hate is boiling over into real-world violence, said the Globe. Robert Bowers, accused of murdering 11 people at a Pittsburgh synagogue last month, left a stream of anti-Semitic messages on Gab, a social network favored by far-right extremists. Cesar Sayoc, the Florida man who allegedly mailed bombs to leading Democrats, had been reported for making death threats on Twitter. The incidents highlight the biggest challenge facing social media firms: “What to do about the threats and abuse that pollute their platforms.” Facebook and Twitter have tried to use algorithms to crack down on online vitriol, but those efforts have merely “highlighted the limitations of today’s technology.” So far, “algorithms have proved no match for the nuance of human language.” Facebook has hired 7,500 humans to moderate content, but the difficulty for these employees is deciding “what is acceptable and what is not.” What one person views as demeaning, another may see as political speech that’s worth protecting. Platforms with strict rules also run the risk of driving hateful users to fringe services such as Gab, where it’s harder for society “to track the threat or reckon with it.”

Tech companies have had some success in tackling hate speech and extremism, said Patrick Tucker in DefenseOne.com. Back in 2014, the problem was “content from extremists of a different sort: violent jihadist groups such as ISIS.” Facebook began employing contractors to track jihadist content in extremist chat rooms so that they’d be ready to censor the material when it appeared on the platform. Intelligence sources also tell Facebook, “in as close to real time as possible, when bad content is being released,” says Erin Marie Saltman, a counterterrorism expert at the company. Social media firms must adopt similar tactics for domestic extremism. Many tech companies pay bounties to programmers who find bugs in their code, said Ina Fried in Axios.com. Why not do the same for users who report hate speech? Tech needs to devote the same energy to “minimizing hate and harassment” as it does to boosting profits.

There’s a chance that in more-developed countries “things will stabilize,” said Ryan Broderick in BuzzFeedNews.com. Wealthier consumers in those nations now increasingly get their news from reliable sources located behind paywalls. Others still make do with “algorithmically served memes, poorly aggregated news articles, and YouTube videos.” Social media inundates users with anti-Muslim videos in Myanmar and Hindu nationalist propaganda in India. Worryingly, things could get worse, said Joan Solsman in CNET.com. Deep fakes—manipulated videos that can “turn almost anybody into an audiovisual puppet”—haven’t yet surfaced in the U.S., but it’s only a matter of time before they do. “Ask an expert about escaping fake news in your social feed and you’ll get a bleak response: You can’t.”
