Police using AI tech with ‘troubling’ secrecy
Plans drawn up by former MI5 chief would force authorities to be more transparent
POLICE and government agencies would have to publicly declare when they are using artificial intelligence to fight crime, under proposals to safeguard the civil liberties of members of the public.
Lord Evans of Weardale, a former head of MI5 who is overseeing an official review of the use of AI in the public sector, said it was “troubling” that little was known about the increased use of AI by the authorities, which are deploying automated software to recognise faces and help decide whether suspects should be bailed.
Last week it emerged that the emergency services could in the future remotely deploy drones to monitor accidents or crime scenes.
Responding to fears that the use of AI could infringe the civil liberties of members of the public, Lord Evans said it was currently “very difficult” to find out where automated systems were being used by authorities.
In an interview with The Sunday Telegraph, Lord Evans, who chairs Whitehall’s committee on standards in public life, said the use of AI by public bodies should be “visible and declared” where it could infringe on civil liberties.
The crossbench peer, who was director general of MI5 until 2013, also called for the introduction of a “clear set of guidance” on the use of AI.
He warned that current automated systems, which help authorities to make decisions affecting members of the public, can contain inadvertent prejudices against particular groups.
Lord Evans’s committee has been carrying out a review of the issue since March, and is due to submit its final report to Boris Johnson in February.
In a letter to Lord Evans dated Oct 7, the Prime Minister said he “completely agreed” that “we need to ensure standards are upheld as AI technology is increasingly used and procured across the public sector”. He said he looked forward to reading the findings.
Speaking as the committee finalised the review, Lord Evans said AI had the potential to spark “better informed human judgments” across the public sector. “If you get it right, and it’s done intelligently, there is actually potential benefit, because decision-making at the moment is probably not wholly rational and objective,” he said.
“If you get the systems in place, you might actually be in a better position to demonstrate why what’s being done is [more] fair, objective and so on, than it is with just humans working on their own.”
But he added: “It was very difficult to find out where AI is being used in the public sector. And it shouldn’t, in our view, be as difficult as that … We haven’t got a very good view as to where this is being used and how. I think that’s a little bit troubling.”
Lord Evans continued: “I think there should be some arrangement whereby any public authority or agency which is using this ought to be proactively open about that.
“Not because there’s a problem, because probably 99 per cent of the time there won’t be a problem.
“But if you are wanting proper scrutiny and accountability for it, then people have got to know where it is being used. At the very minimum, it should be visible, and declared, where it has the potential for impacting on civil liberties and human rights and freedoms.”
The inquiry was started after a report by the Royal United Services Institute think tank warned last year of a lack of “clear guidance and codes of practice”. The RUSI report highlighted how Durham Constabulary was using an algorithm to assess the risk of individuals offending in order to help decide whether they should be released from custody.
In 2016 an investigation found that, under a system then in place in the US, black defendants were almost twice as likely as white defendants to be deemed at risk of offending.
AI is also used by police to recognise faces, including at London’s Notting Hill Carnival. Last week it emerged that the Civil Aviation Authority is to oversee a trial to test the flying of highly automated drones beyond an operator’s line of sight, which is currently banned. The move could pave the way for emergency services to remotely deploy drones to monitor accidents or ongoing crimes.
Lord Evans said: “If you’re using it in the criminal justice system … you have got to be confident that you have not inadvertently built prejudice in there.”
The peer said the committee’s discussions had shown that a field such as medicine appeared to be in a better position than police forces to deploy AI technology with safeguards, given the processes already in place to test and assess new drugs and treatments.
He said: “They have got an established route for making sure that it works ethically … and that sort of professional process for innovation isn’t endemic across policing.
“Individual police forces make their own judgments. I’m sure all of them are trying to do it in a sensible way, but there isn’t that same discipline.”
He added: “I do think we need to have a clear set of guidance … and applied through codes or whatever for each individual area of public work.”