PC Pro

Does facial recognition have a future?

Regulation against the future tech is looming amid concerns about its accuracy for policing and other public uses. Nicole Kobie reveals the future of facial recognition


In January, Robert Julian-Borchak Williams was handcuffed and arrested in front of his family for shoplifting after being identified by facial recognition used by the Detroit Police Department. The system was wrong and he wasn’t a criminal but, because a machine said so, Williams spent 30 hours in jail.

Williams has the distinction of being the first person arrested and jailed after being falsely identified by facial recognition – or, at least, the first person that we the public have been told about. The Detroit police chief said at a meeting following the reports of Williams’ arrest that the system misidentified suspects 96% of the time. Given the wider discussion around reforming policing in the US following the killing of George Floyd by Minneapolis officers, it’s no wonder calls for bans of the tech are starting to be heard.

Amazon, Microsoft and IBM soon paused sales of facial-recognition systems to police, although it’s worth noting that there are plenty of specialist companies that still sell to authorities. Politicians are calling for a blanket ban until the technology is better understood and proven safe. “There should probably be some kind of restrictions,” Jim Jordan, a Republican representative, said in a committee hearing. “It seems to me it’s time for a timeout.”

That’s in the US. In the UK, police continue to use the controversial technology. The Met Police used it at the Notting Hill Carnival and outside Stratford station in London, but the tech is also used by police in South Wales. “Facial recognition has been creeping across the UK in public spaces,” Garfield Benjamin, researcher at Solent University, told PC Pro. “It is used by the police, shopping centres, and events such as concerts and festivals. It appears in most major cities and increasingly other places, but is particularly prevalent across London where the Met Police and private developers have been actively widening its use.”

That’s despite a growing body of evidence that suggests the systems aren’t accurate, with research from activist group Big Brother Watch claiming that 93% of people stopped by the Met Police using the tech were incorrectly identified. A further study by the University of Essex showed the Met Police’s system was accurate 19% of the time.

Can facial recognition ever work? Is a temporary ban enough? Or is this a technology that should forever be relegated to photo apps rather than serious use? The answers to these questions will decide the future of facial recognition – but the road forward isn’t clear.

The problems with facial recognition tech

The problems with facial recognition aren’t limited to a few instances or uses – they run across the entire industry. A study by the US National Institute of Standards and Technology tested 189 systems from 99 companies, finding that black and Asian faces were between ten and 100 times more likely to be falsely identified than white faces.

What causes such problems? Sometimes the results are due to poor-quality training data, which could be too limited or biased – some datasets don’t have as many pictures of black people as other racial groups, for example, meaning the system has less to go on. In other instances, the algorithms are flawed, again perhaps because of human bias, meaning good data is misinterpreted.

That could be solved by having a “human in the loop”, where a person uses data from an AI but still makes the final decision – what you would expect to happen with policing, with a facial-recognition system flagging a suspect for officers to investigate, not blindly arrest. But we humans too easily put our faith in machines, says Birgit Schippers, a senior lecturer at St Mary’s University College Belfast. “There’s also concern over automation bias, where human operators trust, perhaps blindly, decisions proposed by a facial-recognition technology,” she said. “Trained human operators should in fact take decisions that are based in law.”

Even a sound system trained well on a solid dataset can have downsides. “It has a profound impact on our fundamental human rights, beginning with the potential for blanket surveillance that creates a chill factor, which impacts negatively on our freedom of expression, and perhaps our willingness to display nonconformist behaviour in public places,” Schippers explained. “Another fundamental concern is lack of informed consent… we do not know what is going to happen with our data.”

Then there’s the other side of human intervention: misuse. “Another key concern is the way that facial-recognition technology can be used to target marginalised, vulnerable, perhaps already overpoliced communities,” she said.

Whether we allow the tech in policing or elsewhere should depend on whether the benefits outweigh the downsides, argues Kentaro Toyama, a computer scientist at the University of Michigan. “The technology provides some kinds of convenience – you no longer have to hand-label all your friends in Facebook photos, and law enforcement can sometimes find criminal suspects quicker,” Toyama said. “But, all technology is a double-edged sword. You now also have less privacy, and law enforcement sometimes goes after the wrong people.” And it’s worth remembering, added Toyama, that facial recognition isn’t a necessity. “There was no such technology – at least, none that was very accurate – until five to ten years ago, and there were… no major disasters and no geopolitical crises because of the lack.”

Fixing facial recognition

“Another fundamental concern is lack of informed consent… we do not know what is going to happen with our data”

New technologies don’t arrive fully formed and perfect. They need to be trialled and tested to spot flaws, bugs and knock-on effects before being rolled out more widely. Arguably, facial recognition has been rolled out too quickly: we’re still clearly in the phase of discovering the problems both with and caused by this technology.

So, now that we know about bias, inaccuracy and misuse, can we fix those problems to make this technology viable? “If people are seeking to make them fairer, then on a technical level we need to address bias in training data which leads to misidentification of women and ethnic minorities,” said Solent University’s Benjamin. But, he added, you must also be vigilant about the audit data used to test these systems, as biased audits often conceal deeper flaws: “If your training data is mostly white males, and your audit data is mostly white males, then the tests won’t see any problem with only being able to correctly identify white males.”

There have been efforts to build more diverse facial-recognition training sets, but that only addresses one part of the problem. The systems themselves can have flaws in their decision-making, and once a machine-learning algorithm is fully trained, we don’t necessarily know what it’s looking for when it examines an image of a person.

There are ways around this. A system could share its workings, telling users why it thinks two images are of the same person. But this comes back to automation bias: humans learn quickly to lean on machine-made decisions. The police in the Williams case should have treated the facial-recognition system as a single witness, and investigated further – had they asked, they would have learned Williams had an alibi for the time of the theft. In short, even with a perfect system, we humans can still be a problem.

Regulation, regulation, regulation

Given those challenges and the serious consequences of inaccuracy and misuse, it’s clear that facial recognition should be carefully monitored. That means regulators need to step in.

However, regulators aren’t always up to the job. “Facial recognition crosses at least three major regulators in the UK: the CCTV Commissioner, Biometrics Commissioner and Information Commissioner,” said Benjamin. “All three have logged major concerns about the use of these technologies but have so far been unable to come together to properly regulate their use. The Biometrics Commissioner even had to complain to the Met Police when they misrepresented his views, making it seem like he was in favour of its use. We need more inter-regulator mechanisms, resources and empowerment to tackle these bigger and more systemic issues.”

Beyond that, there is no specific regulation that addresses these concerns with facial recognition in the UK, noted St Mary’s University College Belfast’s Schippers, but a private member’s bill seeking to ban the use of facial recognition in public places is currently working its way through parliament. In Scotland, MSPs have already made such a recommendation, although plans by Police Scotland to use the technology had already been put on hold.

Such a pause could let regulators assess how and when to use the technology. “As the pros and cons become clearer, we should gradually allow certain applications, at progressively larger scales, taking each step with accompanying research and oversight, so we can understand the impacts,” said the University of Michigan’s Toyama.

That’s worked for other potentially dangerous, but useful, advanced technologies. “The most effective form of this is the development of nuclear energy and weapons – not just anyone can experiment with it, sell it, or use it,” Toyama added. “It’s tightly regulated everywhere, as it should be.”

Time for a ban?

Facial recognition is flawed, has the potential for serious negative repercussions, and regulators are struggling to control its use. Until those challenges can be overcome, many experts believe the technology should be banned from any serious use. “There is little to no evidence that the technologies provide any real benefit, particularly compared to their cost, and the rights violations are too great to continue their deployment,” Benjamin said.

Toyama agrees that a moratorium is necessary until the potential impacts are better understood.

“Personally, I think that many uses can be allowed as long as they are narrowly circumscribed and have careful oversight… though, I would only say that in the context of institutions I trust on the whole,” Toyama explained.

Schippers would also like to see a ban – not only on the use of facial-recognition technology by police forces, but by private companies too. “Retailers, bars, airports, building sites, leisure centres all use facial-recognition technology,” Schippers said. “It’s becoming impossible to ignore.”

“It’s just another arrow in the quiver of technologies that corporations and governments use to erode our privacy”

What’s next?

Facial recognition is quickly becoming a case study in how not to test and roll out a new idea – and other future technologies could repeat the same mistakes.

Look at driverless cars or drones: both are being pushed hard by companies and governments as necessary solutions to societal problems, despite the technologies remaining unproven, regulation not yet being in place, and the potential downsides not being fully considered.

That said, facial recognition seems more alarming than its fellow startup technologies. “There’s something about facial recognition that many people feel to be particularly creepy – but facial recognition is just another arrow in the quiver of technologies that corporations and governments use to erode our privacy, and therefore, our ability to be effective, democratic citizens,” said Toyama.

Because of that, and the inaccuracies, missteps and misuse, facial recognition faces a reckoning – and it’s coming fast.

“I think the next five years will see a strong tipping point either way for facial recognition,” said Benjamin. “With some companies ceasing to sell the technologies to the police, and some regulatory success, we could see them fall out of favour. But the government and many police forces are very keen on expanding their use, so it will be a matter of whether rights and equality or security and oppression win out.”

Technology development continues to accelerate, but we should control its pace. We need to either slow it down via regulatory approval and testing, or speed up our own understanding of how it works and what could go wrong – or we risk more people like Williams being made victims of so-called progress.

BELOW A study in the US found that black people are more likely to be misidentified

BELOW The tech is often rolled out before the consequences are fully understood
