The Independent

ABOUT-FACE

Police trials of facial recognition technology are raising concerns about its impact on human rights. Tighter regulation, say Joe Purshouse and Liz Campbell, is critical


Automated facial recognition technology has been used at a number of crowd events in England and Wales over the past two years to identify suspects and prevent crime. The technology can recognise people by comparing their facial features in real time with an image already stored on a “watch list”, which could be from a police database or social media account.
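In outline, systems like these reduce each face to a numerical “embedding” and flag anyone whose embedding is sufficiently similar to one on the watch list. The Python sketch below is purely illustrative: the random vectors stand in for embeddings that a real system would compute with a trained neural network, and the 0.6 threshold is an assumption for the example, not any force's actual setting.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two face embeddings: 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watch_list(probe, watch_list, threshold=0.6):
    # Return the best watch-list entry whose similarity clears the
    # threshold, or None if nobody matches. The threshold is the key
    # lever: lower it and more faces match, including wrongly.
    best_name, best_score = None, threshold
    for name, reference in watch_list.items():
        score = cosine_similarity(probe, reference)
        if score >= best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_name else None

# Stand-in embeddings; a real system would derive these from camera
# frames and from custody or social media photographs.
rng = np.random.default_rng(0)
watch_list = {f"person_{i}": rng.normal(size=128) for i in range(3)}
probe = watch_list["person_1"] + rng.normal(scale=0.1, size=128)  # noisy re-sighting
print(match_against_watch_list(probe, watch_list))

The point of the sketch is the threshold: every deployment has to choose one, and that choice, together with image quality, drives the false positive numbers discussed below.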

Such technology is becoming increasingly popular with police forces around the world. Where successful, it can have positive and headline-grabbing effects – for example, tracing missing children in India. But facial recognition technology is controversial, with research showing that it can be inaccurate and discriminatory. San Francisco is even considering a complete ban on its use by police.

Several British police forces have ongoing facial recognition trials. Our new research into the legal challenges posed by police use of facial recognition technology suggests that, on the publicly available data, arrest rates are low and far outweighed by the number of incorrect matches made in live public surveillance operations. This creates a risk that innocent people may be stopped and searched, which can be a daunting experience.
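One reason incorrect matches can swamp correct ones is simple base-rate arithmetic: in a large crowd almost nobody is on the watch list, so even a small false positive rate generates many false alerts. All four numbers below are assumptions chosen for illustration, not figures from any trial.

# Hypothetical scan of a crowd; every figure here is an assumption.
scanned = 100_000            # faces scanned at an event
listed = 10                  # of those, genuinely on the watch list
true_positive_rate = 0.90    # chance a listed person is flagged
false_positive_rate = 0.001  # chance an unlisted person is flagged

true_alerts = listed * true_positive_rate                # 9 correct flags
false_alerts = (scanned - listed) * false_positive_rate  # ~100 wrong flags
print(f"correct: {true_alerts:.0f}, incorrect: {false_alerts:.0f}")
# Even at a 0.1% error rate, wrong matches outnumber right ones ~11 to 1.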


Such trials are also costly. South Wales Police received a £2.6m government grant to test the technology, and, so far, the Metropolitan Police has spent more than £200,000 on its ongoing trial.

Police have also been criticised for questionable practices in the use of facial recognition technology. The Metropolitan Police built and used a watch list of “fixated individuals” on Remembrance Sunday in 2017. Reports suggest these people were identified, in some cases, on criteria relating to mental ill-health, raising concerns that the technology was used in a discriminatory manner.

In June 2017, at the Uefa Champions League final in Cardiff, South Wales Police reportedly deployed facial recognition technology using low-quality images provided by Uefa, the European football governing body, and the system produced more than 2,000 false positive matches. Its accuracy improved in subsequent deployments, but false positive matches still frequently outnumber successful identifications.

Impact on human rights

When justifying their use of facial recognition technology in terms of its effectiveness in crime control and prevention, senior police figures tend to suggest they are mindful of human rights concerns, and that their deployments of the technology are lawful and proportionate. However, the courts have not yet tested these claims, and parliament has not debated the appropriate limits of police use of this technology.

Facial recognition technology breaches social norms of acceptable conduct in a public space. When in public, we might expect to be subject to a passing glance from others, including police officers. But we expect to be free from sustained or intensive scrutiny, involving cross-referencing back to our social media feeds. Facial recognition technology allows police to extract such personal information from us and use this information in ways we cannot control.

The limited independent testing and research done so far on facial recognition technology indicates that numerous systems misidentify ethnic minorities and women at higher rates than the rest of the population.

South Wales Police has suggested, without publishing a detailed statistical breakdown, that its system does not suffer from these drawbacks. Despite calls from the scientific community for rigorous testing of the performance of facial recognition systems, the Metropolitan Police has not published how its system performs relative to the gender, ethnicity or age of those subject to its use. This creates a risk that minority groups, who are already arrested at much higher rates than white people, will be further overpoliced following false positive matches.

Need for tighter regulation

As questions over its accuracy remain, it’s too early for police to be using facial recognition surveillance in live policing operations. Accuracy isn’t the only issue with the technology, though, and as it improves it’s important to think about how facial recognition should be regulated.

While police deployments of facial recognition technology must comply with the Data Protection Act 2018 and the Surveillance Camera Code of Practice, these legal regimes don’t provide guidelines or rules specifically regulating its use by police. As a result, the regulatory framework gives little guidance about the proper threshold at which inclusion on a watch list is lawful.


In their trials, police forces have been collecting, comparing and storing data in different ways. The UK’s Information Commissioner expressed concern last year about the absence of national-level coordination and a comprehensive governance framework to oversee facial recognition deployment.

Most images used to populate watch lists are gathered from police databases, often captured when people are taken into custody. There is a particular risk that people with old and minor convictions, or even those who have been arrested or investigated but have no convictions at all, may find themselves stigmatised through facial recognition surveillance.

Given the impact of facial recognition technology on human rights, its use by police should be limited, focusing only on serious crimes or threats to public safety, rather than being used as pervasively as public CCTV currently is.

Inconsistent practices between police forces also suggest the need for a clearer regulatory framework. This should keep watch lists small, and set quality requirements both for facial recognition systems and for the way images are compiled and stored for watch lists.

As some police forces have already begun to embrace facial recognition surveillance, legislators must keep pace so that human rights are respected.

Joe Purshouse is a lecturer in criminal law at the University of East Anglia and Liz Campbell is the Francine McNiff Professor of Criminal Jurisprudence at Monash University. This article first appeared on The Conversation

[Image: Are you being watched?]
[Image: Fans enter the stadium for the 2017 Uefa Champions League final in the Welsh capital (PA)]
