The Daily Telegraph

Humans can’t escape accountability for AI

Algorithms are in theory useful tools, but their ‘black box’ decisions shouldn’t be accepted without question

Adrian Weller is a member of the independent board for the Centre for Data Ethics and Innovation, and programme director for AI at The Alan Turing Institute

The way organisations make decisions is changing. An explosion in the volume of data, coupled with the growing sophistication and accessibility of algorithms, means that organisations have increasing opportunities to use machine learning and artificial intelligence to support decision-making.

This is a good thing, in theory. Take recruitment. It has never been easier to apply for a job – a few clicks of a button online, and you’re in the running. As a result, there has been a surge in the number of employment applications. However, this hasn’t been great for ensuring applicants are treated consistently.

Tired eyes skim-reading hundreds of CVs was never going to be a recipe for fair decision-making: if you happen to be near the bottom of the pile, good luck. But AI is beginning to make things easier. More importantly, the technology could also make things fairer – there is enormous potential for data-driven tools to help standardise processes and address areas of discretion (or indiscretion) where human biases can creep in.

And yet we have seen many examples of algorithms amplifying historic biases, or creating them anew. In the US, for example, an algorithm used to predict the likelihood of a criminal going on to reoffend was shown to have a bias against black defendants: white defendants were more likely to be incorrectly judged as low risk, and black defendants more likely to be incorrectly judged as high risk.

We can, and must, do better. We have to be clearer on accountability: the buck stops at the door of leaders. Organisations and individuals – be they in the public or private sector – need to be clear that they retain accountability for decisions made by both their human teams and their algorithms. Until we have this, leaders in organisations are less likely to push for answers about whether an algorithm is working properly, and how it reached its conclusions. Simply accepting a “black box” without justification won’t cut it. This is particularly crucial in the public sector: citizens can’t opt out of local government or interactions with the police, and decisions made in these sectors have real-life, acutely felt impacts.

This requires good, anticipatory governance. Many of the high-profile cases of algorithmic bias could have been anticipated with careful evaluation and mitigation of the potential risks. Organisations, and more specifically their leaders, need to ensure that the right capabilities and structures are in place so that this happens both before algorithms are introduced into decision-making processes, and throughout those processes. Doing this well requires listening, to understand the expectations and concerns of the people who will be using an algorithm, and those who will be affected by it.

Leaders must strive to get this right now. There are steps for regulators, government and industry to take – but if we collectively drag our heels, public confidence will dwindle.

Part of the challenge is in skills: we need to build an ecosystem of skilled professionals to help organisations get fairness right and provide AI assurance. The Government has asked the Centre for Data Ethics and Innovation, which publishes its report today on this issue, to focus on AI assurance as a priority. We are bringing together a diverse range of organisations to begin the work needed to accelerate the development of an effective AI accountability system in the UK. Think vehicle safety, but for AI: we ensure the cars on our roads are safe to drive, and new cars are becoming safer year after year – so how will we do the same for AI? This is no small task, but it is achievable.

Enabling data and AI technologies to drive better, fairer, more trustworthy decision-making is a challenge that countries face around the world. By taking a lead in this area, and working together with our international partners, the UK can help to address bias and inequalities, not only within our own borders, but also across the globe.
