Calls for algorithm regulator
Plans to govern by algorithm raise red flags in wake of A-level chaos
Critics have warned that algorithmic injustices highlighted by the A-level results furore will only increase without an official regulator to monitor their usage.
Officials in the UK are increasingly using computer models to automate decisions based on the government’s datasets, but as the recent A-level fiasco has shown, such practices have huge implications for individuals.
“We’ve talked about algorithm regulation in academia and now there’s going to be pressure for a regulatory body in charge of this,” said Paul Bernal, professor of law at the University of East Anglia. “The thing that made a difference with the A-levels is we saw a direct impact – people were going to lose university places because of this algorithm.”
While the A-level scandal, which initially saw thousands of students’ grades downgraded, was the first major event to bring the issue to the fore, Bernal claims it’s been a simmering issue that was always going to boil over. “Algorithms are being made to make decisions that they shouldn’t be used for and the natural consequence of that is that individuals are punished through no fault of their own,” he said. “The same things will have happened to people already – where algorithms have been used in credit ratings, job applications and personalised pricing – but it’s less visible.”
There’s immediate potential for similar scandals too, with the government already planning to use algorithms to allocate and adjudicate on benefit decisions, as well as in the medical arena. “They will look at AI to decide who’s going to get the Covid-19 vaccine first and decisions about access to medicine will be made through algorithms and health data,” said Bernal. “And they’re building up systems to assess people’s benefit entitlement algorithmically, which is a recipe for disaster.”
With the government continuously ramping up its “digital by default” ambitions, global watchdogs are also cautioning over the potential damage. “We are witnessing the gradual disappearance of the post-war British welfare state behind a webpage and an algorithm,” said special rapporteur Philip Alston in a report for the United Nations on poverty and human rights in late 2018. “The impact on the human rights of the most vulnerable in the UK will be immense.”
It’s not only government-led initiatives that concern critics, either, with union bosses concerned that algorithmic bias will impact job seekers. “Employers already use all sorts of shortcuts to sift through applications. If an algorithm was used I wouldn’t expect applicants to be told,” said Karam Bales of the National Education Union.
“It can reinforce the postcode lottery. If you are from a deprived neighbourhood you are less likely to succeed due to having access to fewer opportunities, because algorithms would pick this up and downgrade candidates based on the area they grew up in.”
Damage limitation
Experts believe the only way to prevent further damage is to create a regulator to ensure public service algorithms are opened up before they are implemented, so that civil society can inspect them. “This is all happening after the event, but the methods should be made public to all concerned and made available to people with stakes in this,” said Bernal. “They should have a chance for input before they have an impact on individual people.
“Like with the NHS tracking app, data-risk assessments need to be done in advance to see what’s going to go on. Effectively, you could have an ‘algorithmic fairness assessment’ that is made public and people can criticise it and the methods. There is no reason to keep this method secret, unless you’re afraid that if you make it public somebody is going to criticise you,” Bernal added.
“Statutory oversight would involve at least a framework, setting out when algorithms can and can’t be used, what data they can and can’t consider, what standards of transparency are needed, and would empower an independent body to investigate the procurement, deployment, and use of algorithms.”
Closed shop
The situation is exacerbated, critics argue, by the fact that much of the planning and development of these systems happens behind closed doors, either in government departments or outsourced to private companies, and there’s little that anyone who’s given a raw deal can do to contest decisions. On top of that, there’s a lack of transparency because neither companies nor departments want to share the details of the systems they’ve put in place.
“Sometimes they’re proprietary – it really depends on how they’ve been procured,” said Jennifer Cobbe, coordinator of the Trust & Technology Initiative at the University of Cambridge. “In some cases they’ll be outsourced to a private company, who will claim that they’re confidential.
“That’s obviously incompatible with basic principles of public sector transparency and accountability. Sometimes they’re developed in-house and public bodies will try to keep them secret – which again is incompatible with basic principles of public sector transparency and accountability,” Cobbe added.