AI must be judged by human standards
Algorithms are deciding whether we will get hired, fired or sent to prison, but why should we trust them?
The way we live our lives is often determined not solely by us, but by others. They decide whether we will be hired, receive loans, be admitted to university or be found to have committed a crime. Traditionally, “the others” have been humans: employers, bank managers, university board members or judges – whom we expect to make fair decisions.
The rise of Big Data and algorithms is changing all this. Data collections and “machine learning techniques” allow for vast numbers of decisions to be automated. Algorithms or artificial intelligence (AI) can be more efficient, less expensive and, if well-designed, more accurate than humans. So it is unsurprising that we are replacing human decision-makers with decision-making algorithms, and that they are now deciding whether we get hired, fired, or sent to prison.
However, algorithms are often complex and opaque, difficult to scrutinise, and can make biased and discriminatory decisions. Yesterday, it was reported that Facebook aborted an experiment because two robots began communicating with each other in a language only they could understand. The reality was more complex, but the case raises a crucial question: if AI is capable of thinking in ways we can’t comprehend, how accountable can it be?
Obviously, inscrutable and unfair decision-making is neither new nor exclusive to robots. But we commonly agree that if a human is allowed to make a judgment that significantly affects us, we should be able to assess whether that judgment is fair. If we feel wrongly treated, we have laws that help to establish parity between the person who assesses and the person being assessed.
Algorithms powering AI should be subject to the same scrutiny, with the same expectations of parity between the algorithmic evaluators that will increasingly come to assess us, and their human subjects. Yet as things stand, we treat algorithms differently because a) we have a tendency to trust them immediately; b) we assume their decisions are justified, for reasons that are largely beyond us; and c) we allow them to continue operating in this opaque way even when transparency is feasible.
Judges, for example, do not trust witnesses by default. They question them, assess the legitimacy of their testimony and allow them to be cross-examined. If a witness cannot explain what they think, courts will not rely on their testimony. If a witness refuses to answer questions because revealing information could contravene their own interests, judges will not fully trust their assessment.
But we are more forgiving with algorithms. As a result, decision-makers who use algorithms – from those who authorise loans to those who run university admissions – can claim justification for operating in a far less transparent fashion than has historically been the case. The mystique surrounding algorithms lessens the burden on institutions to justify their actions, so parity between algorithms and human subjects is not achieved.
This should not be the goal of innovation. We should not automate decisions for the sake of automation. Rather, we should use technology to improve society. Machine learning has the potential to make more accurate, less biased and less discriminatory decisions. But this will happen only if we hold algorithms to the same standards as humans, making sure that we do not blindly trust them, and that we retain the right to question and understand their decisions.
Already this may be slipping from our grasp. Parliament’s Science and Technology Committee recently reported on the match between champion Go player Lee Sedol and a machine from Google’s DeepMind research unit. In one game, the report noted, the programme “was able to beat its human opponent by playing a highly unusual move that prompted commentators to assume [it] had malfunctioned”. Far from it: it was a brilliant move. But neither the machine, nor human onlookers, could explain how or why it had chosen to make it. Decisions affecting the most fundamental aspects of our lives will need and deserve greater explanation than that.