The media are selling us an AI fantasy

John Naughton


Artificial intelligence (AI) is a term that is now widely used (and abused), loosely defined and mostly misunderstood. Much the same might be said of, say, quantum physics. But there is one important difference, for whereas quantum phenomena are not likely to have much of a direct impact on the lives of most people, one particular manifestation of AI – machine learning – is already having a measurable impact on most of us.

The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.”

Critical analysis of this narrative suggests that the formula for creating it involves mixing one part fact with three parts self-serving corporate cant and one part tech fantasy emitted by geeks who regularly smoke their own exhaust. The truly extraordinary thing, therefore, is how many apparently sane people seem to take the narrative as a credible version of humanity’s future.

Chief among them is our own dear prime minister, who in recent speeches has identified AI as a major growth area for both British industry and healthcare. But she is by no means the only politician to have drunk that particular Kool-Aid.

Why do people believe so much nonsense about AI? The obvious answer is that they are influenced by what they see, hear and read in mainstream media. But until now that was just an anecdotal conjecture. The good news is that we now have some empirical support for it, in the shape of a remarkable investigation by the Reuters Institute for the Study of Journalism at Oxford University into how UK media cover artificial intelligence.

The researchers conducted a systematic examination of 760 articles published in the first eight months of 2018 by six mainstream UK news outlets, chosen to represent a variety of political leanings – the Telegraph, Mail Online (and the Daily Mail), the Guardian, HuffPost, the BBC and the UK edition of Wired magazine. The main conclusion of the study is that media coverage of AI is dominated by the industry itself. Nearly 60% of articles were focused on new products, announcements and initiatives supposedly involving AI; a third were based on industry sources; and 12% explicitly mentioned Elon Musk, the would-be colonist of Mars.

Critically, AI products were often portrayed as relevant and competent solutions to a range of public problems. Journalists rarely questioned whether AI was likely to be the best answer to these problems, nor did they acknowledge debates about the technology’s public effects.

“By amplifying industry’s self-interested claims about AI,” said one of the researchers, “media coverage presents AI as a solution to a range of problems that will disrupt nearly all areas of our lives, often without acknowledging ongoing debates concerning AI’s potential effects. In this way, coverage also positions AI mostly as a private commercial concern and undercuts the role and potential of public action in addressing this emerging public issue.”

This research reveals why so many people seem oblivious to, or complacent about, the challenges that AI technology poses to fundamental rights and the rule of law. The tech industry narrative is explicitly designed to make sure that societies don’t twig this until it’s too late to do anything about it. (In the same way that it’s now too late to do anything about fake news.) The Oxford research suggests that the strategy is succeeding and that mainstream journalism is unwittingly aiding and abetting it.

Another plank in the industry’s strategy is to pretend that all the important issues about AI are about ethics; accordingly, the companies have banded together to finance numerous initiatives to study ethical issues, in the hope of earning brownie points from gullible politicians and potential regulators. This is what is known in rugby circles as “getting your retaliation in first”, and the result is what can only be described as “ethics theatre”, much like the security theatre that goes on at airports.

Nobody should be taken in by this kind of deception. There are ethical issues in the development and deployment of any technology, but in the end it’s law, not ethics, that should decide what happens, as Paul Nemitz, principal adviser to the European commission, points out in a terrific article just published by the Royal Society. Just as architects have to think about building codes when designing a house, he writes, tech companies “will have to think from the outset… about how their future program could affect democracy, fundamental rights and the rule of law and how to ensure that the program does not undermine or disregard… these basic tenets of constitutional democracy”.

Yep. So let’s have no more “soft” coverage of artificial intelligence and some real, sceptical journalism instead.

Erica the robot being exhibited in Madrid last year: are we being told lies about AI? Photograph: Gabriel Bouys/AFP/Getty Images
