The Daily Telegraph

It shouldn’t be up to Mark Zuckerberg to set the rules on facial recognition tech

What is Facebook without faces? The social network site said this week that it would be shutting down its facial recognition tool and deleting the face templates it holds for more than a billion faces because “there are many concerns” about the technology. What’s not to like?

Well, given what we know about Facebook, it’s safe to assume this isn’t an entirely unprompted act of kindness.

The company was forced to pay a $650 million settlement over facial recognition in an Illinois lawsuit, and given that it has already trained a powerful facial recognition tool, known as DeepFace, from user photos, perhaps the whole thing is simply no longer worth the trouble.

But its decision is a reminder that the whole business of mass biometric data-gathering is proceeding apace and largely unregulated.

Facial recognition technology has been oddly uncontroversial so far. This is because cameras capture faces passively.

But just imagine, for example, that a privately hired forensics officer was following you around as you did your shopping, dusting every product you touched for fingerprints and then storing them in a private database. You would probably be at least mildly put out.

Yet this is precisely what is happening with your face. Specifically, it is happening in Southern Co-op shops, a chain of co-ops in southern England.

Our faces, like our gaits and our voices, are unique to each of us and can identify a person in the same way that blood and fingerprints can. Yet unlike DNA or prints, facial recognition data is not subject to any rules in this country about how it is stored or who can access it.

We rely on the benevolence of individual police forces, councils and private companies to ensure that it isn’t misused. This seems like something of an oversight, to put it mildly.

Given the ubiquity of camera phones, anonymity is one of the few elements of privacy we have left.

Facial recognition might be handy in situations where a person needs to prove their identity, or to catch a criminal.

But on a systemic scale it is more malign than good, allowing companies, rogue individuals or governments to mark people as suspicious, profile, monitor, embarrass, blackmail, manipulate, control or develop ever-more powerful tools to market products, ideas and creeds to us as we go about our daily business.

And it will inevitably be deployed militarily and for the purposes of assassinations by drones.

Increasingly, computer scientists are shunning the whole area. Toby Walsh, a leading Australian professor of artificial intelligence, says he used to think the technology had benefits but now refuses to work on facial recognition at all because “there are too many negatives”.

Facebook’s newly rebranded parent company Meta may be in damage-limitation mode, following leaks that reveal its knowledge of the role that its sites play in teen eating disorders, religious hatred and people trafficking.

But its statement announcing the end of Facebook’s facial recognition tool spends many more words extolling the technology’s virtues than discussing its risks.

That is because Facebook has no intention of actually stopping facial recognition. The new virtual reality “metaverse” that chief executive Mark Zuckerberg wants to build will rely on scanning faces to build avatars, and it is looking at building the technology into “smart glasses” that would identify anyone the wearer sees.

So yes, the decision to delete a billion face templates is a good one. But it shouldn’t be Mr Zuckerberg’s to make.

