CodeSport

In this month’s column, we discuss the problem of content moderation and compliance checking using NLP techniques.


We have been discussing the machine reading comprehension task over the last couple of months. This month, we take a break from that discussion and focus on a real-life problem that NLP (natural language processing) can help solve. Let us start with a question to our readers. We all know what the coolest job in information technology is these days. As Hal Varian, Google’s chief economist, remarked a few years back, it is the job of the data scientist (https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century). But do you know what the worst technology job is? Well, it is that of the content moderators on social media sites such as Facebook or YouTube. Their job is to constantly sift through the user-generated content (UGC) being posted on these websites and filter out content that is abusive.

Content moderation requires analysing a wide variety of user-generated content – blogs, emails on community forums, news/articles posted on social media sites, tweets, videos, photos, and even online games. Content moderators need to identify unsuitable/abusive content and ensure that it is taken down quickly.

Content moderation gained a lot of public attention last year, when a user posted live videos of a killing on Facebook. Sites such as YouTube and Facebook employ a large number of human content moderators whose job is to ensure that abusive/illegal content is blocked from public viewing. This includes filtering out anything pornographic, violent visuals or language, exploitative images of minors, the soliciting of sexual favours, racist comments, etc, from the text, video or audio tracks posted on the Internet. However, performing this task leads to enormous stress and burnout among the human content moderators. There have even been cases of post-traumatic stress disorder (PTSD) being prevalent among people working in this space. In addition, as the volume of UGC on the Internet increases exponentially, human moderation cannot scale and often becomes error-prone.

There are two basic types of content moderation – reactive and proactive. In reactive content moderation, the filtering happens offline, in the sense that after the content is posted, moderators scan it and decide whether it is acceptable or not. In proactive content moderation, as soon as the content is submitted, it is analysed for any objectionable content in real time, before it gets posted.
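As a rough illustration of the difference, the short Python sketch below contrasts the two flows. The names used here (is_objectionable, review_queue, public_feed, post_reactive, post_proactive) are placeholders assumed for this example, not part of any real moderation system.

from queue import Queue

review_queue = Queue()   # posts awaiting human review (reactive path)
public_feed = []         # content visible to everyone


def is_objectionable(text):
    """Stand-in for a rule-based or machine-learnt classifier."""
    return "objectionable" in text.lower()


def post_reactive(text):
    # Reactive moderation: publish first, review afterwards;
    # a moderator may take the post down later.
    public_feed.append(text)
    review_queue.put(text)


def post_proactive(text):
    # Proactive moderation: analyse in real time and block the post
    # before it ever becomes publicly visible.
    if is_objectionable(text):
        return
    public_feed.append(text)

In the reactive flow, objectionable content is visible until a moderator acts on it; in the proactive flow, the check sits on the publishing path itself, which is why it must run in real time.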

Given the typical need for real-time filtering of objectionable content on the large social media sites, human moderation efforts cannot prevent objectionable content from being posted for public viewing in time. Due to the issues associated with human content moderation, there has been a trend towards automated approaches for online content moderation. Large Internet sites such as Facebook and YouTube have invested heavily in developing machine learning/AI based tools for automatic content moderation. While content moderation applies to multiple media such as video, text and speech, in this column we focus on the problem of content moderation for text.

Whatever the form of the content, we first need to understand what makes this problem challenging.

Let us first consider text. The obvious approach is to create a lexicon of words associated with abusive, hateful and objectionable text. Given this lexicon, it is straightforward to flag objectionable content. Yet, why doesn’t this approach work? There are a number of reasons. First and foremost, people who create and post objectionable content always look for ways to circumvent the content moderation scheme. For instance, if your objectionable-content lexicon contains the word ‘bullshit’, the submitter can use ‘bulls**t’ to fool the lexicon. While such simple character substitutions can be caught with regular expressions or fuzzy matching, people also circumvent moderation by using an innocuous-sounding word instead of the objectionable one (for instance, using ‘grape’ in place of ‘rape’). Hence, simple lexicon-based systems are easily circumvented by intelligent workarounds.
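To make this concrete, here is a minimal Python sketch of such a lexicon-based filter, using a made-up two-word lexicon; the function names (flag_exact, obfuscation_pattern, flag_fuzzy) are placeholders of my own, not a real moderation API. The fuzzy variant catches character-level masking such as ‘bulls**t’, but a word-level code word such as ‘grape’ still slips through, because its surface form is perfectly innocuous.

import re

# A hypothetical, tiny lexicon of objectionable words (illustration only).
LEXICON = {"bullshit", "rape"}


def flag_exact(text):
    """Naive lexicon check: flags only verbatim occurrences."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(w in LEXICON for w in words)


def obfuscation_pattern(word):
    # Allow each letter to be replaced by a common mask character or digit,
    # so spellings like 'bulls**t' or 'bu11shit' still match.
    return re.compile(
        r"\b" + "".join(f"[{re.escape(c)}*@$#!0-9]" for c in word) + r"\b",
        re.IGNORECASE,
    )


PATTERNS = [obfuscation_pattern(w) for w in LEXICON]


def flag_fuzzy(text):
    """Lexicon check hardened against character-level obfuscation."""
    return any(p.search(text) for p in PATTERNS)


if __name__ == "__main__":
    print(flag_exact("this is bullshit"))           # True
    print(flag_exact("this is bulls**t"))           # False: evades exact match
    print(flag_fuzzy("this is bulls**t"))           # True: masking is caught
    print(flag_fuzzy("nothing beats grape juice"))  # False: the code word
    # 'grape' looks entirely innocuous, so no lexicon pattern fires

Even this hardened version only addresses character-level tricks; deciding that ‘grape’ is being used as a stand-in for ‘rape’ requires context, which is exactly where lexicon-based systems break down.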

Sandya Mannarswamy
