The Independent

IDENTITY CRISIS

In the data era, what makes you, you? The multibillion-dollar biometrics industry is changing everything we know about identity, privacy and anonymity. Steve Boggan investigates


Have you ever wondered what makes you, you? Is it a name or your ability to prove it is yours? Is it the fact that you have a card or a piece of paper with that name on it? And in a world where identities can be so easily forged or stolen, what’s to say another believable – but criminal – version of you isn’t out there right now doing unspeakable things?

These are questions that have given rise to the modern biometrics industry, a £26bn-a-year global business that is casting aside hard, anachronistic analogue proofs of identity and replacing them with digitised representations of your unique physical attributes. Already, you can switch on your phone or computer just by looking at it. That’s facial recognition technology. Or you might use the “dactyloscopic” function on your computer to turn it on – put simply, you’d have your fingerprint read by a pad on your PC.

So far, so convenient.

But did you know that the way you talk is unique to you? And the way you walk? So is the shape of your earlobe and of your hand or the delicate patterns of your iris or the meandering outlines of your veins. Even the way you type can identify you.

All, or any, of these make you, you, a staggeringly wonderful product of nature and evolution – and who wouldn’t want to rely on any of these beautiful characteristics as proof of identity over a dreary passport or driving licence or ID card?

Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places

It might surprise you, then, to learn that the use of biometrics to identify people in public is being seen as the single biggest threat to civil liberties and human rights since the Second World War. All over the civilised world, electronic privacy campaigners, freedom advocates, computing demigods – even the pope – are calling for a moratorium on the use of facial biometrics with artificial intelligence surveillance systems, which, they argue, could see an end to anonymity as we know it. Last month the two most important bodies overseeing data protection issues in the European Union called for a ban on the use of mass surveillance systems that make use of biometrics with AI in public places; systems that can track and identify hundreds or thousands of people at once.

In a joint statement, Andrea Jelinek, chair of the European Data Protection Board, and Wojciech Wiewiorowski, the European Data Protection Supervisor, said: “Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places. Applications such as live facial recognition (LFR) interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms.”

Last year, Pope Francis issued a joint statement with IBM and Microsoft calling for curbs on the use of AI with facial recognition systems. And scores of civil rights groups, ranging from Amnesty International, Liberty, Big Brother Watch, the Ada Lovelace Institute and the European Digital Rights network to Privacy International, have launched campaigns to have mass biometric surveillance banned. So, what is this type of surveillance and why is it so controversial?

First of all, it would be useful to understand the meaning of “biometrics”. These are divided into measurable physiological and behavioural traits. Physiological traits include morphological identifiers such as fingerprints, face shape, vein patterns and so on. They also include biological identifiers such as DNA, blood or saliva. Behavioural measurements include voice recognition, gait, some gestures and even the way you sign your name.

There is nothing new in the use of biometrics in identification: as far back as the second century BC, Babylonian and Chinese rulers used fingerprints on declarations and contracts. In the mid-19th century, handprints and fingerprints were used commercially as evidence of agreements, but it was not until 1902 that the first UK criminal conviction was secured on the basis of fingerprint evidence.

In the 1860s, telegraphy specialists using Morse code were able to name the sender of a message by the unique rhythm of their dots and dashes – in much the same way as keystrokes can identify a computer user today. And, remember, even the photograph in your passport is a biometric identifier.

The modern-day difference is that these biometrics can now be produced in digital formats that computers can read at lightning speed and in massive quantities, with AI able to compare them with the contents of databases or “watch lists” before reaching conclusions about the person to whom they belong – ie you.

What privacy campaigners are concerned about is what happens when high-resolution CCTV cameras with facial recognition (or other biometric-identifying) technology and AI capabilities are positioned in public places. They can scan, record and compare the biometrics of all the people who pass by, all the time.

And once these digital identifiers are linked to your name – because over time you will need to use them to gain access to services, to pass through borders, to satisfy financial and economic checks, to prove entitlement to healthcare or benefits and so on – then all your movements, with whom you associate, which demonstrations you attend, which abortion clinic you might visit, which LGBTQ+ bars you frequent, which doctor you have an appointment with, which sex worker you drop in on, or which tryst your spouse doesn’t know about, could be visible to whoever has control of those cameras, be it law enforcement, transport providers, marketeers or the state.

“Not even the Stasi in East Germany or Gaddafi in Libya had access to that much information and power,” says Dr Daragh Murray, senior lecturer at Essex University’s Human Rights Centre and School of Law. “If you were gay in a country where that was illegal, then you’d have to change your behaviour. If you wanted to demonstrate against a totalitarian regime, you could expect to be arrested. If you just wanted to be different, you could face discrimination.

“This kind of technology could mean you’d be tracked and identified everywhere you went, all the time. It would interfere with our right to assembly, our right to privacy and our expectation of anonymity. These systems can even make assumptions about you by the way you behave, your gender, age or ethnicity.”

A recent study by the US video surveillance research firm IPVM found that four Russian-based or Russian-backed companies – AxxonSoft, Tevian, VisionLabs and NtechLab – had developed systems able to classify faces on the basis of race. Each of the companies admitted having this capability but responded variously that it was an accident, would be removed or was simply not being used.

However, the mere fact of systems being able to focus their attention on certain ethnicities rang alarm bells internationally, because AI programmes run on algorithms created by humans, and they are used in conjunction with watchlists and databases also created by humans. And if those humans had prejudices – either conscious or unconscious – then the results could be disastrous.

“The findings underline the ugly racism baked into these systems,” says Edin Omanovic, advocacy director at London-based Privacy International. “Far from being benign security tools which can be abused, such tools are deeply rooted in some of humanity’s most destructive ideas and purpose-made for discrimination.”

Anton Nazarkin, global sales director for VisionLabs, confirmed to me that its system could differentiate between white, black, Indian and Asian faces, not because it wanted to discriminate between these groupings, but because it had a duty to its customers to be accurate.

Imagine a government tracking everywhere you walked over the past month without your permission or knowledge

However, he said the company had never recorded a single sale of its “ethnicity estimator” function. VisionLabs also says its AI surveillance systems can determine “whether a facial expression corresponds to a broad interpretation of the display of certain emotions”. These are: anger, disgust, fear, happiness, surprise and sadness.

“A lot of stupid questions are asked about facial recognition systems because very few people understand them,” Nazarkin says. “For example, my wife could die this morning, and in the afternoon I might pass by you and smile. This doesn’t mean I’m happy. It just means I smiled at you.

“Journalists keep claiming that the Chinese have systems that can identify if someone is a Uyghur in order to discriminate against them. But this isn’t true. What is true is that there are databases containing the identities of Uyghurs and these are being used with facial recognition technology.”

This, of course, is a subtle difference. AI and facial recognition don’t see a person as a Uyghur, but as a specific person who happens to be a Uyghur. But what if, for example, someone is gay in a country where homosexuality is either frowned upon or illegal, and cameras catch them going into a suspected gay bar? A malign state could charge them with an offence; its oppressive apparatus could blackmail them or ruin their professional and personal lives. The same would apply to other “offences” in public, surveilled spaces. “Yes,” says Nazarkin, “but traditionally, police have known where such venues are anyway and they used traditional methods to put them under surveillance.”

So, this just makes it easier for them? “Yes,” he replies.

In a now-famous blog post calling for curbs on facial recognition technology in 2018, Microsoft president Brad Smith said: “All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head.” So, it is only fair to point out that VisionLabs’ facial recognition technology is being used – in more than 40 countries, according to Nazarkin – to identify online child abusers and to search crowds for dangerous criminals or terrorists with murderous intent.

In his blog, Smith went on: “This technology can catalogue your photos, help reunite families, or potentially be misused and…

Campaigners say this could be the end of anonymity as we know it (AFP/Getty)
Wojciech Wiewiorowski, the European Data Protection Supervisor (EDPS)
Footage from CCTV cameras is kept for 31 days, the police have said (AFP/Getty)
‘Not even the Stasi or Gaddafi had access to that much information,’ says Daragh Murray (Supplied)
Cameras record schoolchildren in Xinjiang in China, the heavily policed region where Uyghurs face increasing pressure (AFP/Getty)
