IDENTITY CRISIS
In the data era, what makes you, you? The multibillion-dollar biometrics industry is changing everything we know about identity, privacy and anonymity. Steve Boggan investigates
Have you ever wondered what makes you, you? Is it a name or your ability to prove it is yours? Is it the fact that you have a card or a piece of paper with that name on it? And in a world where identities can be so easily forged or stolen, what’s to say another believable – but criminal – version of you isn’t out there right now doing unspeakable things?
These are questions that have given rise to the modern biometrics industry, a £26bn-a-year global business that is casting aside anachronistic analogue proofs of identity and replacing them with digitised representations of your unique physical attributes. Already, you can switch on your phone or computer just by looking at it. That’s facial recognition technology. Or you might use the “dactyloscopic” function on your computer to turn it on – put simply, you’d have your fingerprint read by a pad on your PC.
So far, so convenient.
But did you know that the way you talk is unique to you? And the way you walk? So is the shape of your earlobe and of your hand or the delicate patterns of your iris or the meandering outlines of your veins. Even the way you type can identify you.
All, or any, of these make you, you, a staggeringly wonderful product of nature and evolution – and who wouldn’t want to rely on any of these beautiful characteristics as proof of identity over a dreary passport or driving licence or ID card?
It might surprise you, then, to learn that the use of biometrics to identify people in public is being seen as the single biggest threat to civil liberties and human rights since the Second World War. All over the civilised world, electronic privacy campaigners, freedom advocates, computing demigods – even the pope – are calling for a moratorium on the use of facial biometrics with artificial intelligence surveillance systems, which, they argue, could see an end to anonymity as we know it. Last month the two most important bodies overseeing data protection issues in the European Union called for a ban on the use of mass surveillance systems that make use of biometrics with AI in public places; systems that can track and identify hundreds or thousands of people at once.
In a joint statement, Andrea Jelinek, chair of the European Data Protection Board, and Wojciech Wiewiorowski, the European Data Protection supervisor, said: “Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places. Applications such as live facial recognition (LFR) interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms.”
Last year, Pope Francis issued a joint statement with IBM and Microsoft calling for curbs on the use of AI with facial recognition systems. And scores of civil rights groups – ranging from Amnesty International, Liberty, Big Brother Watch and the Ada Lovelace Institute to the European Digital Rights network and Privacy International – have launched campaigns to have mass biometric surveillance banned. So, what is this type of surveillance and why is it so controversial?
First of all, it would be useful to understand the meaning of “biometrics”. These are divided into measurable physiological and behavioural traits. Physiological traits include morphological identifiers such as fingerprints, face shape, vein patterns and so on. They also include biological identifiers such as DNA, blood or saliva. Behavioural measurements include voice recognition, gait, some gestures and even the way you sign your name.
There is nothing new in the use of biometrics in identification: as far back as the second century BC, Babylonian and Chinese rulers used fingerprints on declarations and contracts. In the mid-19th century, hand and fingerprints were used commercially as evidence of agreements, but it was not until 1902 that the first UK criminal conviction was achieved based on fingerprint evidence.
In the 1860s, telegraphy specialists using Morse code were able to name the sender of a message by the unique frequency of their dots and dashes – in much the same way as keystrokes can identify a computer user today. And, remember, even the photograph in your passport is a biometric identifier.
The modern-day difference is that these biometrics can now be produced in digital formats that computers can read at lightning speed and in massive quantities, with AI able to compare them with the contents of databases or “watch lists” before reaching conclusions about the person to whom they belong – ie you.
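The matching step at the heart of such systems can be sketched in a few lines of code: a face, voice or gait is reduced to a numeric “template”, and the system scores each passer-by’s template against those on a watchlist. The vectors, names and threshold below are purely illustrative inventions, not drawn from any real system – a toy sketch of the general technique, not how any vendor’s product actually works.

```python
import math

def cosine_similarity(a, b):
    # Score how closely two biometric templates point in the same direction
    # (1.0 = identical direction, 0.0 = unrelated).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.8):
    # Return the names whose stored template is similar enough to the
    # probe template captured from a camera or microphone.
    return [name for name, template in watchlist.items()
            if cosine_similarity(probe, template) >= threshold]

# Hypothetical watchlist: name -> stored template (made-up 3-number vectors;
# real systems use vectors with hundreds of dimensions).
watchlist = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.1, 0.8, 0.5],
}

# A template freshly extracted from a passing face.
probe = [0.85, 0.15, 0.28]
hits = match_against_watchlist(probe, watchlist)
```

The controversy is less about this arithmetic than about its scale: the same comparison, run against every face in a crowd and a database of millions, is what turns a convenience feature into mass surveillance.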
What privacy campaigners are concerned about is what happens when high-resolution CCTV cameras with facial recognition (or other biometric-identifying) technology and AI capabilities are positioned in public places. They can scan, record and compare the biometrics of all the people who pass by, all the time.
And once these digital identifiers are linked to your name – because over time you will need to use them to gain access to services, to pass through borders, to satisfy financial and economic checks, to prove entitlement to healthcare or benefits and so on – then all your movements, with whom you associate, which demonstrations you attend, which abortion clinic you might visit, which LGBTQ+ bars you frequent, which doctor you have an appointment with, which sex worker you drop in on, or which tryst your spouse doesn’t know about, could be visible to whoever has control of those cameras, be it law enforcement, transport providers, marketeers or the state.
“Not even the Stasi in East Germany or Gaddafi in Libya had access to that much information and power,” says Dr Daragh Murray, senior lecturer at Essex University’s Human Rights Centre and School of Law. “If you were gay in a country where that was illegal, then you’d have to change your behaviour. If you wanted to demonstrate against a totalitarian regime, you could expect to be arrested. If you just wanted to be different, you could face discrimination.
“This kind of technology could mean you’d be tracked and identified everywhere you went, all the time. It would interfere with our right to assembly, our right to privacy and our expectation of anonymity. These systems can even make assumptions about you by the way you behave, your gender, age or ethnicity.”
A recent study by the US video surveillance research firm IPVM found that four Russian-based or Russian-backed companies, AxxonSoft, Tevian, VisionLabs and NtechLab, had developed systems able to classify faces on the basis of race. Each of the companies admitted having this capability but variously responded that the ability was accidental, would be removed, or was simply not being used.
However, the mere fact of systems being able to focus their attention on certain ethnicities rang alarm bells internationally because AI programmes run on algorithms created by humans, and they are used in conjunction with watchlists and databases also created by humans. And if those humans had prejudices – either conscious or unconscious – then the results could be disastrous.
“The findings underline the ugly racism baked into these systems,” says Edin Omanovic, advocacy director at London-based Privacy International. “Far from being benign security tools which can be abused, such tools are deeply rooted in some of humanity’s most destructive ideas and purpose-made for discrimination.”
Anton Nazarkin, global sales director for VisionLabs, confirmed to me that its system could differentiate between white, black, Indian and Asian, not because it wanted to discriminate between these groupings, but because it had a duty to its customers to be accurate.
However, he said the company had never recorded a single sale of its “ethnicity estimator” function. VisionLabs also says its AI surveillance systems can determine “whether a facial expression corresponds to a broad interpretation of the display of certain emotions”. These are: anger, disgust, fear, happiness, surprise and sadness.
“A lot of stupid questions are asked about facial recognition systems because very few people understand them,” Nazarkin says. “For example, my wife could die this morning, and in the afternoon I might pass by you and smile. This doesn’t mean I’m happy. It just means I smiled at you.
“Journalists keep claiming that the Chinese have systems that can identify if someone is a Uyghur in order to discriminate against them. But this isn’t true. What is true is that there are databases containing the identities of Uyghurs and these are being used with facial recognition technology.”
This, of course, is a subtle difference. AI and facial recognition don’t see a person as a Uyghur, but as a specific person who happens to be a Uyghur. But what if, for example, someone is gay in a country where homosexuality is either frowned upon or illegal, and cameras catch them going into a suspected gay bar? A malign state could charge them with an offence; its oppressive apparatus could blackmail them or ruin their professional and personal lives. The same would apply to other “offences” in public, surveilled, spaces. “Yes,” says Nazarkin, “but traditionally, police have known where such venues are anyway and they used traditional methods to put them under surveillance.”
So, this just makes it easier for them? “Yes,” he replies.
In a now-famous blog post calling for curbs on facial recognition technology in 2018, Microsoft president Brad Smith said: “All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head”. So, it is only fair to point out that VisionLabs’ facial recognition technology is being used – in more than 40 countries, according to Nazarkin – to identify online child abusers and to search crowds for dangerous criminals or terrorists with murderous intent.
In his blog, Smith went on: “This technology can catalogue your photos, help reunite families, or potentially be misused and