An algorithm isn’t needed to figure out this farrago
The A-level scandal has brought issues of computer bias into the mainstream. We need an AI tsar to fix this socially toxic mess
Roger Taylor scuffed his A-levels, but at least he admits it was his own fault. “I did rather badly for various reasons,” Ofqual’s embattled chairman admitted to an interviewer last year. “Lack of application. A sense of ‘What’s the point of it all?’ And also a sort of indecision about what I wanted to do.”
Luckily for Mr Taylor, those disappointing exam results weren’t too much of a setback. He still managed to secure a place at Oxford University to study PPE.
But the hundreds of thousands of British schoolchildren whose A-level grades were downgraded by Ofqual’s algorithm last week are unlikely to feel so fortunate.
Taylor, meanwhile, seems to have stumbled into the midst of Britain’s biggest scandal yet over algorithmic bias – an issue that has now exploded into the mainstream. Ministers may have opted for an about-turn, but either way this was a debate that urgently needed to happen.
An obscure article of EU data protection law – Article 22 of the GDPR – covers decisions based on algorithms, giving individuals “the right not to be subject to a decision based solely on automated processing, including profiling” if it has a legal effect on them. Yet up until now, there has been relatively little scrutiny of the algorithms that increasingly govern our lives.
From credit checks and insurance premiums to the algorithms used by fast food chains such as McDonald’s to sell us more hamburgers, the quiet but steady creep of AI into every facet of our everyday existence has generated a new set of questions over ethics, bias and fairness.
But while poor AI decision-making is unlikely to cause much upset in some areas, in others it is socially toxic.
So it is with the scandal over algorithmic A-level marking, which at a stroke has catapulted this formerly fringe, nerdy issue into millions of British living rooms.
Algorithms – the sets of rules computers follow to make decisions – may be fiendishly complex, but the problem with relying on them too heavily for decisions like this is startlingly simple.
The judgments they reach are only as good as the data they are based upon – data that is often incomplete, incorrect or subject to the same human biases that pervade society at large.
In this case, relying on postcode data and schools’ historic exam grades, rather than the judgment of real-life teachers who know individual pupils, was an easy way to exacerbate and embed existing socio-economic bias: it favoured weaker candidates at smaller, better-performing schools while discriminating against strong pupils at schools that have historically underperformed.
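To see why, consider a minimal sketch in Python. It is emphatically not Ofqual’s published model – every name and grade below is invented – but it shows what follows when this year’s results are drawn from a school’s historical distribution rather than from each pupil’s own work.

```python
# Toy illustration (not Ofqual's actual model): this year's grades are
# taken from the school's past results, not from individual ability.

def moderate_grades(teacher_ranking, historical_grades):
    """Map a teacher's rank order of pupils onto the school's
    historical grade distribution, best-first on both sides."""
    return {pupil: historical_grades[rank]
            for rank, pupil in enumerate(teacher_ranking)}

# A historically underperforming school: its best grade last year was a C.
history = ["C", "C", "D", "D", "E"]
# This year's pupils, ordered best-first by their teachers.
pupils = ["Asha", "Ben", "Cara", "Dev", "Ed"]

# Even if Asha's teachers predicted her an A*, the mapping caps her at
# a C, because the school has never produced anything better.
print(moderate_grades(pupils, history))
# {'Asha': 'C', 'Ben': 'C', 'Cara': 'D', 'Dev': 'D', 'Ed': 'E'}
```

Under a mapping of this shape, no pupil can ever outperform their school’s past, however strong their individual work – which is precisely the complaint levelled at last week’s results.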
To be fair, Taylor and his team at Ofqual faced a tough task: rapidly crafting a workable grading system during the pandemic, after exams were cancelled.
But given the huge importance of these results to the future prospects of millions of young people, how rigorous was the effort to ensure the system was fair? What sort of external oversight existed to monitor the process? And above all, who was regulating the algorithms that were being applied?
Perhaps this is the time to ask whether, instead of trying to regulate the big tech companies, it would be more useful to regulate the algorithms being applied to decision-making. In other words – and for want of a better phrase – do we need an “algorithm tsar” to help prevent this kind of thing happening again?
Such a person or agency would be responsible for ensuring algorithms were fair and free of bias, would lay down ground rules, and would be empowered to approve or reject poorly designed schemes before they were used.
After all, this is a highly specialist field of knowledge. Within the Government, expertise is limited, often forcing civil servants to rely on third-party providers who may have little vested interest in the consequences of their work.
It’s no secret that there is a chronic shortage of skilled professionals in AI. Tencent, the Chinese tech giant, says there are only 300,000 skilled AI engineers globally when millions are needed.
That means salaries for skilled workers have surged in recent years, often pricing them well beyond what government departments can realistically pay.
Taylor really should have known better than to preside over this fiasco.
As well as chairing Ofqual, he chairs the Centre for Data Ethics and Innovation (CDEI), an independent advisory body to the UK government on the benefits and risks of AI. If any organisation was qualified to kick the tyres of Ofqual’s algorithms, this was it. Yet so far, perhaps unsurprisingly, the CDEI has been silent on the exam shambles.
In fact, Taylor himself was strangely prescient about the threat posed by exactly this kind of algorithmic decision-making.
In a CDEI report last year he wrote: “Artificial intelligence and algorithmic systems can now operate vehicles, decide on loan applications and screen candidates for jobs. The technology has the potential to improve lives and benefit society, but it also brings ethical challenges which need to be carefully navigated if we are to make full use of it.”