Google to hire thousands of moderators after outcry over YouTube abuse videos

The company, which owns YouTube, has endured a stream of negative press over violent and offensive content

The Myanmar Times - Weekend | Tech - Photo: Shutterstock

GOOGLE is hiring thousands of new moderators after facing widespread criticism for allowing child abuse videos and other violent and offensive content to flourish on YouTube.

YouTube’s owner announced on Monday that next year it would expand its total workforce to more than 10,000 people responsible for reviewing content that could violate its policies. The news from YouTube’s CEO, Susan Wojcicki, followed a steady stream of negative press surrounding the site’s role in spreading harassing videos, misinformation, hate speech and content that is harmful to children.

Wojcicki said that in addition to an increase in human moderators, YouTube is continuing to develop advanced machine-learning technology to automatically flag problematic content for removal. The company said its new efforts to protect children from dangerous and abusive content and block hate speech on the site were modelled on its ongoing work to fight violent extremist content.

“Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualised decisions on content,” the CEO wrote in a blogpost, saying that moderators have manually reviewed nearly 2m videos for violent extremist content since June, helping train machine-learning systems to identify similar footage in the future.

In recent weeks, YouTube has used machine-learning technology to help human moderators find and shut down hundreds of accounts and hundreds of thousands of comments, according to Wojcicki.

YouTube faced heightened scrutiny last month in the wake of reports that it was allowing violent content to slip past the YouTube Kids filter, which is supposed to block any content that is not appropriate for young users. Some parents recently discovered that YouTube Kids was allowing children to see videos with familiar characters in violent or lewd scenarios, along with nursery rhymes mixed with disturbing imagery, according to the New York Times.

Other reports uncovered “verified” channels featuring child exploitation videos, including viral footage of screaming children being mock-tortured and webcams of young girls in revealing clothing.

YouTube has also repeatedly sparked outrage for its role in perpetuating misinformation and harassing videos in the wake of mass shootings and other national tragedies. The Guardian found that survivors and the relatives of victims of numerous shootings have been subject to a wide range of online abuse and threats, some tied to conspiracy theories featured prominently on YouTube.

Some parents of people killed in high-profile shootings have spent countless hours trying to report abusive videos about their deceased children and have repeatedly called on Google to hire more moderators and to better enforce its policies. It is unclear, however, how the expansion of moderation announced on Monday might affect this kind of content, since YouTube said it was focused on hate speech and child safety.

Although the recent scandals have illustrated the current limits of algorithms in detecting and removing violating content, Wojcicki made clear that YouTube would continue to rely heavily on machine learning, a necessity given the scale of the problem.

YouTube said machine learning was helping its human moderators remove nearly five times as many videos as before, and that 98 percent of videos removed for violent extremism are now flagged by algorithms. Wojcicki claimed that advances in the technology allowed the site to take down nearly 70 percent of violent extremist content within eight hours of its being uploaded.

The statement also said YouTube was reforming its advertising policies, saying it would apply stricter criteria, conduct more manual curation and expand its team of ad reviewers. Last month, a number of high-profile brands suspended YouTube and Google advertising after reports revealed that their ads were placed alongside videos filled with exploitative and sexually explicit comments about children.

In March, a number of corporations also pulled their YouTube ads after learning that they were linked to videos with hate speech and extremist content.

Photo: A woman uses a tablet.
