The Sunday Telegraph

New rules for using AI would prevent officials blaming robots for errors

- By Edward Malnick

RULES governing the behaviour of public officials could be overhauled by a former spy chief over concerns that the advance of artificial intelligence (AI) leaves police and civil servants able to blame robots for their mistakes.

Lord Evans of Weardale, the former director general of MI5, is leading a “scoping” exercise examining how the 25-year-old Nolan Principles, which set out the ethical standards required of civil servants, could be updated to cover decisions that are made with the help of AI.

Lord Evans, who was recently appointed chairman of the Committee on Standards in Public Life, is understood to be concerned that an increasing reliance on technology could effectively shift responsibility away from police officers and civil servants. The existing seven Nolan Principles simply state that anyone in public office must be “accountable to the public for their decisions and actions”.

In September, a report by the Royal United Services Institute (RUSI) think tank warned of a lack of “clear guidance and codes of practice” governing the growing use of algorithms, mathematical formulas used by computers, to decide whether a suspect should be kept in custody.

A spokesman for the Committee on Standards in Public Life confirmed: “We are at the early scoping stage of a project to look at how the Nolan principles can be applied where AI is used to make decisions about, or to help in the delivery of, public services.

“The principles, which are 25 years old next year, are regularly tested and reflect the expectations that the public have of those that serve them.

“We want to look at how these longstanding principles can continue to be ‘built in’ to future service delivery.

“These are huge and fast-moving developments and we’re keen to hear from organisations working in this field or developing thinking in this area as we plan this piece of work.”

The RUSI report highlighted how Durham Constabulary was using a “machine learning” algorithm to assess the risk of individuals offending over the following two years, in order to help decide whether they should be released from custody.

The system bases its prediction on 34 pieces of data, including age, gender and postcode. A resulting score is then one of a number of factors taken into account by an officer in making an overall risk assessment.

AI is also used by police in automated facial recognition software, including at the Notting Hill Carnival.

In a submission to a Parliamentary inquiry into the Implications of Artificial Intelligence in 2017, Thames Valley Police said that in the future “AI could perform many of the process-driven tasks that take place in the police”.

It could be used to assist “investigations by ‘joining the dots’ in databases, risk assessment of offenders, forensic analysis of devices, transcribing and analysis of CCTV, security checks and automation of administrative tasks”.

But the force said there was a risk of “bias” in the AI software and a concern that the technology “might be unable to reason with a human”.

‘We are at the early stage of a project to look at how the Nolan principles can be applied where AI is used’
