Police using AI tech with ‘troubling’ secrecy

Plans drawn up by former MI5 chief would force authorities to be more transparent

The Sunday Telegraph - Front page - By Edward Malnick, Sunday Political Editor

POLICE and government agencies would have to publicly declare when they are using artificial intelligence to fight crime, under proposals to safeguard the civil liberties of members of the public.

Lord Evans of Weardale, a former head of MI5 who is overseeing an official review of the use of AI in the public sector, said it was “troubling” that little was known about the increased use of AI by the authorities, which are deploying automated software to recognise faces and help decide whether suspects should be bailed.

Last week it emerged that the emergency services could in the future remotely deploy drones to monitor accidents or crime scenes.

Responding to fears that the use of AI could infringe the civil liberties of members of the public, Lord Evans said it was currently “very difficult” to find out where automated systems were being used by authorities.

In an interview with The Sunday Telegraph, Lord Evans, who chairs Whitehall’s committee on standards in public life, said the use of AI by public bodies should be “visible and declared” where it could infringe on civil liberties.

The crossbench peer, who was director general of MI5 until 2013, also called for the introduction of a “clear set of guidance” on the use of AI.

He warned that current automated systems, which help authorities to make decisions affecting members of the public, can contain inadvertent prejudices against particular groups.

Lord Evans’s committee has been carrying out a review of the issue since March, and is due to submit its final report to Boris Johnson in February.

In a letter to Lord Evans dated Oct 7, the Prime Minister said he “completely agreed” that “we need to ensure standards are upheld as AI technology is increasingly used and procured across the public sector”. He said he looked forward to reading the findings.

Speaking as the committee finalised the review, Lord Evans said AI had the potential to spark “better informed human judgments” across the public sector. “If you get it right, and it’s done intelligently, there is actually potential benefit, because decision-making at the moment is probably not wholly rational and objective,” he said.

“If you get the systems in place, you might actually be in a better position to demonstrate why what’s being done is [more] fair, objective and so on, than it is with just humans working on their own.”

But he added: “It was very difficult to find out where AI is being used in the public sector. And it shouldn’t, in our view, be as difficult as that … We haven’t got a very good view as to where this is being used and how. I think that’s a little bit troubling.”

Lord Evans continued: “I think there should be some arrangement whereby any public authority or agency which is using this ought to be proactively open about that.

“Not because there’s a problem, because probably 99 per cent of the time there won’t be a problem.

“But if you are wanting proper scrutiny and accountability for it, then people have got to know where it is being used. At the very minimum, it should be visible, and declared, where it has the potential for impacting on civil liberties and human rights and freedoms.”

The inquiry was started after a report by the Royal United Services Institute think tank warned last year of a lack of “clear guidance and codes of practice”. The RUSI report highlighted how Durham Constabulary was using an algorithm to assess the risk of individuals offending in order to help decide whether they should be released from custody.

In 2016 an investigation found that black defendants were almost twice as likely to be deemed at risk of offending as white defendants, under a system then in place in the US.

AI is also used by police to recognise faces, including at London’s Notting Hill Carnival. Last week it emerged that the Civil Aviation Authority is to oversee a trial to test the flying of highly automated drones beyond an operator’s line of sight, which is currently banned. The move could pave the way for emergency services to remotely deploy drones to monitor accidents or ongoing crimes.

Lord Evans said: “If you’re using it in the criminal justice system … you have got to be confident that you have not inadvertently built prejudice in there.”
The peer said the committee’s discussions had shown that a field such as medicine appeared to be in a better position than police forces to deploy AI technology with safeguards, given the processes already in place to test and assess new drugs and treatments.

He said: “They have got an established route for making sure that it works ethically … and that sort of professional process for innovation isn’t endemic across policing.

“Individual police forces make their own judgments. I’m sure all of them are trying to do it in a sensible way, but there isn’t that same discipline.”

He added: “I do think we need to have a clear set of guidance … and applied through codes or whatever for each individual area of public work.”
