The Guardian (USA)

Facial recognition… coming to a supermarket near you

- Tom Chivers

Paul Wilks runs a Budgens supermarket in Aylesbury, Buckinghamshire. Like most retail owners, he’d had problems with shoplifting – largely carried out by a relatively small number of repeat offenders. Then a year or so ago, exasperated, he installed something called Facewatch. It’s a facial-recognition system that watches people coming into the store; it has a database of “subjects of interest” (SOIs), and if it recognises one, it sends a discreet alert to the store manager. “If someone triggers the alert,” says Paul, “they’re approached by a member of management, and asked to leave, and most of the time they duly do.”

Facial recognition, in one form or another, is in the news most weeks at the moment. Recently, a novelty phone app, FaceApp, which takes your photo and ages it to show what you’ll look like in a few decades, caused a public freakout when people realised it was made by a Russian company and decided it was using their faces for surveillance. (It appears to have been doing nothing especially objectionable.) More seriously, the city authority in San Francisco has banned the use of facial-recognition technologies by the police and other government agencies; and the House of Commons science and technology committee has called for British police to stop using it as well until regulation is in place, though the then home secretary (now chancellor) Sajid Javid said he was in favour of trials continuing.

There is a growing demand for the technology in shops, with dozens of companies selling retail facial-recognition software – perhaps because, in recent years, it has come to seem pointless to report shoplifting to the police. Budgets for policing in England have been cut in real terms by about 20% since 2010, and a change in the law in 2014, whereby shoplifting of goods below a value of £200 was made a summary offence (ie less serious, not to be tried by a jury), meant police directed time and resources away from shoplifting. The number of people being arrested and charged has fallen dramatically, and fewer than 10% of shoplifting incidents are now reported. The British Retail Consortium trade group estimates that £700m is lost annually to theft. Retailers are looking for other methods, and the rapid improvement in AI technologies, along with a dramatic fall in cost, means facial recognition is now viable as one of them.

“The systems are getting better year on year,” says Josh Davis, a psychologist at the University of Greenwich who works on facial recognition in humans and AIs. The US National Institute of Standards and Technology assesses the state of facial recognition every year, he says, and the ability of the best algorithms to match a new image to a face in a database improved 20-fold between 2014 and 2018. And, analogously with Moore’s law – the observation that computing power doubles roughly every two years – the cost of the technology falls year on year as well.

In ideal environments such as airport check-ins, where the face is straight on and well lit and the camera is high-quality, AI face recognition now outperforms humans, and has done since at least 2014. In the wild – with the camera looking down on faces that are often poorly lit and captured at lower definition – it is far less effective, says Prof Maja Pantic, an AI researcher at Imperial College London. “It’s far from the 99.9% you get with mugshots,” she says. “But it is good, and moving relatively fast forward.”

Each algorithm is different, but they all work in fundamentally the same way. They are given large numbers of images of people and told which ones show the same person; they then analyse those images to pick out the features that identify them. Those features are not things like “size of ear” or “length of nose”, says Pantic, but something more like textures: the algorithm assesses faces by gradients of light and dark, which allow it to detect points on the face and build a 3D image. “If you grow a beard or gain a lot of weight,” she says, “very often a passport control machine cannot recognise you, because a large part of the texture is different.”
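
To make that idea concrete, here is a minimal sketch, in Python with numpy, of the kind of texture description Pantic is gesturing at: summarising a face image by the distribution of its light-to-dark gradients, then comparing two faces by how similar those summaries are. It is a toy illustration under assumed names, not Facewatch’s pipeline or any vendor’s – modern systems learn far richer features with deep neural networks rather than hand-coding them like this.

```python
import numpy as np

def gradient_features(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Summarise a grayscale face image by its gradients of light and dark.

    A hand-coded stand-in for the learned 'texture' features described
    above; real systems learn these with deep networks.
    """
    # Finite-difference gradients: how fast brightness changes at each pixel
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)        # strength of each edge
    orientation = np.arctan2(gy, gx)    # direction of each edge, in radians

    # Histogram of edge directions, weighted by edge strength
    hist, _ = np.histogram(orientation, bins=bins,
                           range=(-np.pi, np.pi), weights=magnitude)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def similarity(face_a: np.ndarray, face_b: np.ndarray) -> float:
    """Cosine similarity of the two feature vectors: 1.0 means identical."""
    return float(np.dot(gradient_features(face_a), gradient_features(face_b)))
```

A beard or significant weight gain changes those gradients across much of the face, which is why, as Pantic says, a passport gate can fail to recognise you.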

But while the algorithms are understood at this quite high level, the specific things they use to identify people are not, and cannot be known in detail. It’s a black box: the training data goes into the algorithm, sloshes around a bit, and produces very effective systems, but the exact way it works is not clear to the developer. “We don’t have theoretical proofs of anything,” says Pantic. The problem is that there is so much data: you could go into the system and disentangle what it was doing if it had looked at a few tens of photos, perhaps, or a few hundred, but when it has looked at millions, each containing large amounts of data itself, it becomes impossible. “The transparency is not there,” she says.

Still, neither she nor Davis is unduly worried about the rise of facial recognition. “I don’t really see what the big issue is,” Pantic says. Police prosecutions at the moment often rely on eyewitnesses, “who say ‘sure, that’s him, that’s her’, but it’s not”: at least facial recognition, she says, can be more accurate. She is concerned about other invasions of privacy, about intrusions by the government into our phones, but, she says, facial recognition represents a “fairly limited cost of privacy” given the gains it can provide, and given how much privacy we’ve already given up by carrying our phones with us all the time. “The GPS knows exactly where you are, what you’re eating, when you go to the office, whether you stayed out,” she says. “The faces are the cherry on top of the pie, and we talk about the cherry and forget about the pie.”

As with all algorithmic assessment, there is reasonable concern about bias. No algorithm is better than its dataset, and – simply put – there are more pictures of white people on the internet than there are of black people. “We have less data on dark-skinned people,” says Pantic. “Large databases of Caucasian people, not so large on Chinese and Indian, desperately bad on people of African descent.” Davis says there is an additional problem: darker skin reflects less light, providing less information for the algorithms to work with. For these two reasons, algorithms are more likely to correctly identify white people than black people. “That’s problematic for stop and search,” says Davis. Silkie Carlo, the director of the not-for-profit civil liberties organisation Big Brother Watch, describes one situation where an 18-year-old black man was “swooped by four officers, put up against a wall, fingerprinted, phone taken, before police realised the face recognition had got the wrong guy”.

That said, the Facewatch facial-recognition system is, at least on white men under the highly controlled conditions of the company’s office, unnervingly good. Nick Fisher, Facewatch’s CEO, showed me a demo version; he walked through a door and a wall-mounted camera in front of him took a photo of his face; immediately, an alert came up on his phone (he’s in the system as an SOI, so he can demonstrate it). I did the same thing, and it recognised me as a face, but no alert was sent and, he said, the face data was immediately deleted, because I was not an SOI.

Facewatch are keen to say that they’re not a technology company themselves – they’re a data management company. They provide management of the watch lists in what they say is compliance with the European General Data Protection Regulation (GDPR). If someone is seen shoplifting on camera or by a staff member, their image can be stored as an SOI; if they are then seen in that shop again, the shop manager will get an alert. GDPR allows these watch lists to be shared in a “proportionate” way; so if you’re caught on camera like this once, it can be shared with other local Facewatch users. In London, says Fisher, that would be an eight-mile radius. If you’re seen stealing repeatedly in many different cities, it could proportionately be shared nationwide; if you’re never seen stealing again, your face is taken off the database after two years.
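
Read as a specification, the watch-list scheme Fisher describes boils down to a small retention policy: a record of incidents per face, a sharing radius that widens “proportionately” with repeat offences, and a two-year expiry clock. Here is a minimal sketch of that policy in Python; the names, and the threshold at which sharing widens, are assumptions for illustration, not Facewatch’s actual rules.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class WatchlistEntry:
    """One 'subject of interest' record, as the scheme is described above."""
    face_id: str                     # reference to the stored face template
    incidents: list = field(default_factory=list)  # datetimes of sightings
    share_radius_miles: float = 8.0  # the article's example radius for London

    def record_incident(self, when: datetime) -> None:
        """Log a sighting and widen sharing 'proportionately'."""
        self.incidents.append(when)
        # The threshold here is invented for illustration; GDPR leaves
        # 'proportionate' sharing as a judgment call.
        if len(self.incidents) >= 3:
            self.share_radius_miles = float("inf")  # ie shared nationwide

    def expired(self, now: datetime) -> bool:
        """Entries lapse two years after the most recent incident."""
        if not self.incidents:
            return True
        return now - max(self.incidents) > timedelta(days=365 * 2)
```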

Carlo is not reassured: she says that it involves placing a lot of trust in retail companies and their security staff to use this technology fairly. “We’re not talking about police but security staff who aren’t held to the same professional standards. They get stuff wrong all the time. What if they have an altercation [with a customer] or a grievance?” The SOI database system, she says, subverts our justice system. “How do you know if you’re on the watch list? You’re not guilty of anything, in the legal sense. If there’s proof that you’ve committed a crime, you need to go through the criminal justice system; otherwise we’re in a system of private policing. We’re entering the sphere of pre-crime.”

Fisher and Facewatch, though, argue that it is not so unlike the age-old practice of shops and bars keeping pictures of regular troublemakers up in the staff room. The difference, they say, is that it does not rely on untrained humans to spot those troublemakers, but on a much more accurate system.

The problem is that, at the moment, there is very little regulation – other than GDPR – governing what you can and can’t do with a facial-recognition system. Facewatch say, loudly and often, that they want regulation, so they know what they are legally allowed to do. On the other hand, Carlo and Big Brother Watch, along with other civil liberties groups, want an urgent moratorium and a detailed democratic debate about the extent to which we are happy with technologies like these in our lives. “Our politicians don’t seem to be aware that we’re living through a seismic technological revolution,” she says. “Jumping straight to legislation and ‘safeguards’ is to short-circuit what needs to be a much bigger exercise.”

Either way, it needs to happen fast. In Buckinghamshire, Paul Wilks is already using the technology in his Budgens, and is finding it makes life easier. Before he installed the system, things were stolen from his shop every day or two; since then, it has become much less common. “There’s definitely been a reduction in unknown losses, and a reduction in disruptive incidents,” he says. As well as the financial gain, his staff feel safer, especially late at night, “which is good for team morale”. If enough retailers start using facial-recognition technology before the government takes notice, we may find that the democratic discussion has been short-circuited already.

Hot spots: facial-recognition technology around the world

China: China has embraced facial recognition, using it to implement a national surveillance system and bolster its authoritarian regime. The technology is already pervasive in Chinese society, with facial recognition used for airport check-ins and cash withdrawals, and to monitor the attention of school students. In the Xinjiang region, facial recognition is increasingly used to aid the oppression of the Uighur Muslims, with the state collecting their biometric data, including face scans.

United Kingdom: In the UK, police are conducting trials of the technology in public areas in south Wales, Leicestershire and London. However, there are currently no laws or government policies in place to regulate its use. Police use of facial recognition is being challenged in the courts.

United States: One in two American citizens is on a law-enforcement facial-recognition database. Concerns over lack of regulation and privacy led the city of San Francisco to ban the use of facial recognition by the police in May this year, with Somerville, Massachusetts, following its lead. Dani Ellenby

• The AI Does Not Hate You by Tom Chivers is published by Weidenfeld & Nicolson (£16.99). To order a copy go to guardianbookshop.com. Free UK p&p on all online orders over £15

Photograph by Getty Images/iStockphoto
Facial-recognition technology being tested in Romford, Essex, earlier this year. Photograph: Ian Davidson/Alamy
