Can anyone any longer be sure what to believe on social media?

The East African – FRONT PAGE. Joseph S. Nye is a professor at Harvard and author of Is the American Century Over? Copyright: Project Syndicate, 2018. www.project-syndicate.org

The term “fake news” has become an epithet that US President Donald Trump attaches to any unfavourable story. But it is also an analytical term that describes deliberate disinformation presented in the form of a conventional news report.

The problem is not completely novel. In 1925, Harper’s Magazine published an article about the dangers of “fake news.” But today two-thirds of American adults get some of their news from social media, which rest on a business model that lends itself to outside manipulation and where algorithms can easily be gamed for profit or malign purposes.

Whether amateur, criminal, or governmental, many organisations – both domestic and foreign – are skilled at reverse engineering how tech platforms parse information. To give Russia credit, it was one of the first governments to understand how to weaponise social media and to use America’s own companies against it.

Overwhelmed by the sheer volume of information available online, people find it difficult to know what to focus on. Attention, rather than information, becomes the scarce resource to capture. Big data and artificial intelligence allow micro-targeting of communication so that the information people receive is limited to a “filter bubble” of the like-minded.

The “free” services offered by social media are based on a profit model in which users’ information and attention are actually the products, which are sold to advertisers. Algorithms are designed to learn what keeps users engaged so that they can be served more ads and produce more revenue.

Emotions such as outrage stimulate engagement, and news that is outrageous but false has been shown to engage more viewers than accurate news. One study found that such falsehoods on Twitter were 70 per cent more likely to be retweeted than accurate news. Likewise, a study of demonstrations in Germany earlier this year found that YouTube’s algorithm systematically directed users toward extremist content because that was where the “clicks” and revenue were greatest.

Fact checking by conventional news media is often unable to keep up, and sometimes can even be counterproductive by drawing more attention to the falsehood.

By its nature, the social-media profit model can be weaponised by states and non-state actors alike. Recently, Facebook has come in for heavy criticism for its cavalier record on protecting users’ privacy. CEO Mark Zuckerberg admitted that in 2016, Facebook was “not prepared for the co-ordinated information operations we regularly face.” The company had, however, “learned a lot since then and have developed sophisticated systems that combine technology and people to prevent election interference on our services.”

Such efforts include automated programs to find and remove fake accounts; featuring Facebook pages that spread disinformation less prominently than in the past; issuing a transparency report on the number of false accounts removed; verifying the nationality of those who place political advertisements; hiring 10,000 additional people to work on security; and improving co-ordination with law enforcement and other companies to address suspicious activity. But the problem is not solved.

An arms race will continue between the social media companies and the states and non-state actors who invest in ways to exploit their systems. Technological solutions like artificial intelligence are not a silver bullet. Because it is often more sensational and outrageous, fake news travels farther and faster than real news. False information on Twitter is retweeted by many more people and far more rapidly than true information, and repeating it, even in a fact-checking context, may increase an individual’s likelihood of accepting it as true.

In preparing for the 2016 US presidential election, the Internet Research Agency in St Petersburg, Russia, spent more than a year creating dozens of social media accounts masquerading as local American news outlets. Sometimes the reports favoured a candidate, but often they were designed simply to give an impression of chaos and disgust with democracy, and to suppress voter turnout.

When Congress passed the Communications Decency Act in 1996, then-infant social media companies were treated as neutral telecoms providers that enabled customers to interact with one another. But this model is clearly outdated. Under political pressure, the major companies have begun to police their networks more carefully and take down obvious fakes, including those propagated by botnets.

But imposing limits on free speech, protected by the First Amendment of the US Constitution, raises difficult practical problems. While machines and non-US actors have no First Amendment rights (and private companies are not bound by the First Amendment in any case), abhorrent domestic groups and individuals do, and they can serve as intermediaries for foreign influencers.

In any case, the damage done by foreign actors may be less than the damage we do to ourselves. The problem of fake news and foreign impersonation of real news sources is difficult to resolve because it involves trade-offs among our important values. The social media companies, wary of coming under attack for censorship, want to avoid regulation by legislators who criticise them for both sins of omission and commission.

Experience from European elections suggests that investigative journalism and alerting the public in advance can help inoculate voters against disinformation campaigns. But the battle with fake news is likely to remain a cat-and-mouse game between its purveyors and the companies whose platforms they exploit. It will become part of the background noise of elections everywhere. Constant vigilance will be the price of protecting our democracies.
