The Daily Telegraph

AI must be judged by human standards

Algorithms are deciding whether we will get hired, fired or sent to prison, but why should we trust them?

- SANDRA WACHTER
Dr Sandra Wachter is a Researcher in Data Ethics at the Oxford Internet Institute and a Research Fellow at the Alan Turing Institute

The way we live our lives is often determined not solely by us, but by others. They will decide whether we will be hired, will receive loans, are admitted to university or have committed a crime. Traditionally, “the others” have been humans: employers, bank managers, university board members or judges – whom we expect to make fair decisions.

The rise of Big Data and algorithms is changing all this. Data collections and “machine learning techniques” allow for vast numbers of decisions to be automated. Algorithms or artificial intelligence (AI) can be more efficient, less expensive and, if well-designed, more accurate than humans. So it is unsurprising that we are replacing human decision-makers with decision-making algorithms, and that they are now deciding whether we get hired, fired, or sent to prison.

However, algorithms are often complex and opaque, difficult to scrutinise, and can make biased and discriminatory decisions. Yesterday, it was reported that Facebook aborted an experiment because two robots began communicating with each other in a language only they could understand. The reality was more complex, but the case raises a crucial question: if AI is capable of thinking in ways we can’t comprehend, how accountable can it be?

Obviously, inscrutable and unfair decision-making is neither new nor exclusive to robots. But we commonly agree that if a human is allowed to make a judgment that significantly affects us, we should be able to assess whether that judgment is fair. If we feel wrongly treated, we have laws that help to establish parity between the person who assesses and the person who is assessed.

Algorithms powering AI should be subject to the same scrutiny, with the same expectations of parity between the algorithmic evaluators that will increasingly come to assess us, and their human subjects. Yet as things stand, we treat algorithms differently because a) we tend to trust them immediately; b) we assume their decisions are justified, for reasons that are largely beyond us; and c) we allow them to continue operating in this opaque way even when transparency is feasible.

Judges, for example, do not trust witnesses by default. They question them, assess the legitimacy of their testimony and allow them to be cross-examined. If a witness cannot explain what they think, courts will not rely on their testimony. If a witness refuses to answer questions because revealing information could contravene their own interests, judges will not fully trust their assessment.

But we are more forgiving with algorithms. As a result, decision-makers who use algorithms – from those who authorise loans to university admissions – can hypothetically claim justification for operating in a far less transparent fashion than has historically been the case. The mystique surrounding algorithms lessens the burden on institutions to justify their actions, so parity between algorithms and human subjects is not achieved.

This should not be the goal of innovation. We should not automate decisions for the sake of automation. Rather, we should use technology to improve society. Machine learning has the potential to make more accurate, less biased and less discriminatory decisions. But this will happen only if we hold algorithms to the same standards as humans, making sure that we do not blindly trust them, and that we retain the right to question and understand their decisions.

Already this may be slipping from our grasp. Parliament’s Science and Technology Committee recently reported on the match between champion Go player Lee Sedol, and a machine from Google’s DeepMind research unit. In one game, the report noted, the programme “was able to beat its human opponent by playing a highly unusual move that prompted commentators to assume [it] had malfunctioned”. Far from it: it was a brilliant move. But neither the machine, nor human onlookers, could explain how or why it had chosen to make it. Decisions affecting the most fundamental aspects of our lives will need and deserve greater explanation than that.