The Sunday Telegraph

How can AI be a true arbiter when all algorithms stem from flawed humans?

- LAURENCE DODDS

It is not, at first glance, what you would call a miscarriage of justice. You’ve been waiting at a bar for over 10 minutes. All around you people are shouting for attention as the bartender glides back and forth, as capricious as a Greek god. You watch them bestow their favour on those who arrived long after you, chatting for what seems like an aeon. And you wonder: what makes them more deserving than you?

Small beer it may be, but this is a matter of justice – of the fair apportionment of goods, rights and privileges. Most people would probably agree that every bar customer has an equal right to be served. We would also agree that this right can be revoked for the under-age and the excessively drunk. We might differ as to what qualifies one person to be served before another, but we probably all think there should be a system for it.

Which is what makes Thursday’s announcement from DataSparQ, a British technology company, so interesting. DataSparQ claims that the average British drinker spends over two months of their life waiting at a bar, so it has developed a face recognition system which tracks customers, assigns them a place in a virtual queue and lets bar staff know who to serve first – as well as spotting people who look under 25.

On the surface, this seems like a positive use of Artificial Intelligence (AI). We have always attempted to outsource our moral decisions to automated systems. A queue is just an algorithm for deciding who gets served first. Indeed, the problem with existing bar systems is that they are not automated enough, leaving too much to the judgment and favouritism of human staff. Law itself is a similar kind of automated system, removing decisions from individual chieftains and outsourcing them to a system of rules. As Morpheus, an AI character in the science-fiction video game Deus Ex, puts it: “God was a dream of good government.”
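
To make the point concrete, here is a minimal sketch in Python of the rule a virtual queue encodes (the names are hypothetical illustrations, not DataSparQ’s actual code):

```python
from collections import deque

# A toy "automated justice" rule: whoever arrived first is served first.
# Hypothetical illustration only -- not any real bar system's code.
class VirtualQueue:
    def __init__(self) -> None:
        self.waiting: deque[str] = deque()

    def arrive(self, customer_id: str) -> None:
        """Record a new customer at the back of the queue."""
        self.waiting.append(customer_id)

    def next_to_serve(self) -> str | None:
        """Pure first-come, first-served: no favourites, no judgment."""
        return self.waiting.popleft() if self.waiting else None

q = VirtualQueue()
for person in ["alice", "bob", "carol"]:
    q.arrive(person)
print(q.next_to_serve())  # alice -- she arrived first
```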

Nevertheless, there are reasons to be very cautious of automated justice. For instance, DataSparQ’s system will also identify how drunk people are in order to “avoid fights”. But current face recognition systems suffer from known biases, misidentifying black people at a much higher rate than white people and women at a higher rate than men. Often these biases are the result of the data on which the AI has been trained, but no data set is neutral. It’s easy to see how a bar AI trained primarily on one ethnic group might systematically misattribute drunkenness to another, or how an AI trained mainly on people with restrained body language might unfairly single out people with more extroverted mannerisms, or indeed people with motor disabilities.
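
The mechanism is easy to demonstrate. In the toy Python simulation below (an invented example, not any real face recognition system), a “drunkenness” threshold is fitted to one group of sober patrons and then applied to another whose ordinary behaviour simply scores higher:

```python
import random
from statistics import mean

random.seed(0)

def scores(centre: float, n: int = 1000) -> list[float]:
    """Simulated classifier scores for one group of patrons."""
    return [random.gauss(centre, 1.0) for _ in range(n)]

# Sober patrons in group A score around 0; sober patrons in group B
# score around 1 -- livelier body language, say, not more drink.
sober_a = scores(0.0)
sober_b = scores(1.0)
drunk_a = scores(3.0)

# "Train" on group A alone: put the threshold midway between its sober
# and drunk averages. Group B never enters the training data.
threshold = (mean(sober_a) + mean(drunk_a)) / 2

def false_alarm_rate(group: list[float]) -> float:
    """Share of sober patrons wrongly flagged as drunk."""
    return sum(s > threshold for s in group) / len(group)

print(f"sober group A flagged: {false_alarm_rate(sober_a):.1%}")
print(f"sober group B flagged: {false_alarm_rate(sober_b):.1%}")
# Roughly 7% versus 31%: the rule never mentions groups, but the skew
# in the training data surfaces directly in the error rates.
```

Nothing in the rule is malicious; the disparity comes entirely from whose behaviour the threshold was fitted to.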

Such biases can be fixed with careful engineering, yet they highlight the broader futility of trying to design AI systems that are totally fair. AI is like a genie, or the brooms from The Sorcerer’s Apprentice: it does exactly what we tell it to, whether or not that’s actually what we want. When AI goes “wrong” that usually means it has exposed gaps, flaws or special exceptions in the instructions we give it. How many human ethical systems have no such holes – have not been punctured by a thought experiment that follows their logic to a conclusion which is intuitively abhorrent? AI can follow such systems to the letter, but it cannot identify a perverse result and it can’t tell us which system we should use.

That should be fine, because obviously we should only use AI as a tool to support human decision-making, not leave it unsupervised to exercise the judgment of Solomon. Except that, very often, we do the latter. David Walliams’s enduring line “computer says no” is not just a joke: we’ve all encountered situations where systems designed to automate decisions are treated as unquestionable. The Home Office has just had to pay £45,000 in compensation to a man detained for five months based on mistaken identity. Numerous US citizens have been wrongly imprisoned or deported due to computer errors. Worse, AI’s inner workings are often guarded as trade secrets by the companies that sell them, impeding democratic oversight of their decisions.

Although in one sense these are pathologies of bureaucracy rather than of AI, AI supercharges them because it operates with the illusion of impartiality. In our technocentric culture we are eager to believe that a machine can eliminate the bias from which we know we all suffer. We want to believe that problems of justice have objectively right answers which we can reach with enough computing power. But they don’t, and we can’t, and there is no escape from the messiness of morality. We have built the god of Morpheus with our own hands. Yet it cannot give us justice if we are not just.
