Algorithms, for Better and Worse
Although decisions made by machines might seem “fairer” than those made by humans, should they be allowed to make them?
IN AUGUST 2020 we saw the first, but not last, protest marches over an algorithm. Due to the Covid pandemic, in-person exams were canceled in the U.K., and instead an algorithm was used to allocate grades based on the predictions of teachers. Students took to the streets of London to protest. The government quickly capitulated and reverted to the grades predicted by teachers. Former British Prime Minister Boris Johnson blamed the fiasco on a “mutant algorithm.” But there was nothing mutant about the algorithm. As far as we can tell, it did exactly what it was meant to do. The problem was that administrators hadn’t thought carefully enough about what the public would find “fair.”
This isn’t just a problem restricted to education. It’s something CEOs everywhere must tackle, as algorithms are used to make decisions across their businesses. Who should get a loan? How much should a person’s insurance cost? Who gets the best (and worst) shifts?
The algorithm in the U.K. exposed two fundamental problems. First, it highlighted how important it is to retain human agency, especially in high-stakes decisions. Having an algorithm give students a predicted grade, however accurate the prediction might have been, denied them this agency.
Second, it exposed and magnified a fundamental problem with the public examination system that existed even when humans were doing the grading. Ranking students nationwide on a simple scale, when this would decide life-changing events like university admission, is inherently problematic. Should your life options be decided by your performance on just one particular exam? That seems no better than the whims of an algorithm. The truth is that algorithms cannot fix broken systems.
One of the promises of algorithms is that they can make decision-making fairer. Humans are terrible at making unbiased decisions. We like to think that we can make fair decisions, but psychologists have identified a large catalog of cognitive biases. Algorithms offer the promise of defeating these, of making perfectly rational, fair and evidence-based decisions. Indeed, they even offer the promise of making decisions in settings either where humans are incompetent, such as decisions that require calculating precise probabilities, or where humans are incapable, such as decisions based on data sets of a scale beyond human comprehension.
I remain optimistic about our algorithmic future. We can expect more and more decisions to be handed over to algorithms. If carefully designed, these algorithms will be as fair as, if not fairer than, humans at these tasks. Equally, there will be many settings in which algorithms will be used to make decisions that humans couldn’t process or make fairly.
But some of the decisions we may try to hand over to machines will be high-stakes.
What grades do we give to students who couldn’t complete their exams because of the pandemic? To which prisoners do we give parole? Which taxpayers do we audit? Who should be short-listed for a job interview?
Even if machines can make high-stakes decisions like these “better” than humans, we might choose not to give all such decisions to machines. We might continue to have humans make certain high-stakes decisions, accepting human fallibility in exchange for human empathy and accountability, rather than submitting to the cold logic of slightly less fallible machines.
In our courtrooms, perhaps human judges should always decide whether a person is jailed. Machines could perhaps make society a safer place, only locking up those who truly are a risk to society. But doing so would transform the world in which we live, turning it into one of the bad dreams envisaged by writers such as Orwell and Huxley.
Toby Walsh is a professor of artificial intelligence and Fellow of the Australian Academy of Science. This is an extract from his book Machines Behaving Badly: The Morality of AI, available in the United States in October.