Otago Daily Times

Facial recognition just the start of a very slippery slope

Facial recognition is big tech’s latest toxic ‘‘gateway’’ app, writes John Naughton, of the Observer.


THE headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. ‘‘Facial recognition is the plutonium of AI’’, it said.

Since plutonium — a byproduct of uranium-based nuclear power generation — is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read.

The article, by Microsoft researcher Luke Stark, argues that facial-recognition technology — one of the current obsessions of the tech industry — is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly.

You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera. There, it is regarded as universally beneficial.

If you have ever come across a suggestion on Facebook to tag a face with a suggested individual’s name, for example, then you have encountered the technology. And it has come on in leaps and bounds as cameras, sensors and machine-learning software have improved and as the supply of training data (images from social media) has multiplied.

We have now reached the point where it is possible to capture images of people’s faces and identify them in real time. Which is the thing that really worries Stark.

Why? Basically because facial-recognition technologies ‘‘have insurmountable flaws in the ways they schematise human faces’’ — particularly in that they reinforce discredited categorisations around race and gender.

In the light of these flaws, Stark argues, the risks of the technologies vastly outweigh the benefits in a way that is reminiscent of hazardous nuclear materials.

‘‘Facial recognition,’’ he says, ‘‘simply by being designed and built, is intrinsically socially toxic, regardless of the intentions of its makers; it needs controls so strict that it should be banned for almost all practical purposes.’’

There are two levels of concern here, one immediate and the other longer-term but perhaps more fundamental.

The short-term issue is that the technology is at present only good at recognising some kinds of faces, mostly those with white complexions, and has difficulty with people of colour.

Whether this is ‘‘insurmountable’’ (as Stark maintains) remains to be seen, but it is alarming enough already because it provides a means of ‘‘racialising’’ societies using the charisma of science.

The longer-term worry is that if this technology becomes normalised, then in the end it will be everywhere; all human beings will essentially be machine-identifiable wherever they go. At that point corporations and governments will have a powerful tool for sorting and categorising populations. And at the moment we seem to have no way of controlling the development of such tools.

To appreciate the depths of our plight with this stuff, imagine if the pharmaceutical industry were allowed to operate the way the tech companies do.

Day after day in their laboratories, researchers would cook up amazingly powerful, interesting and potentially lucrative new drugs which they could then launch on an unsuspecting public without any obligation to demonstrate their efficacy or safety.

Yet this is exactly what has been happening in tech companies for the past two decades — all kinds of ‘‘cool’’, engagement-boosting and sometimes addictive services have been cooked up and launched with no obligation to assess their costs and benefits to society.

In that sense one could think of Facebook Live, say, as the digital analogue of thalidomide — useful for some purposes and toxic for others.

Facebook Live turned out to be useful for a mass killer to broadcast his atrocity; thalidomide was marketed over the counter in Europe as a mild sleeping pill but ultimately caused the birth of thousands of deformed children, and untold anguish.

In the end, we will need some kind of control regime for what the tech companies produce — a kind of Federal Apps Administration, perhaps. But we are nowhere near that at the moment.

Instead (to continue the pharma metaphor) we are in the pre-pharmaceutical era of snake oil and patent medicines launched on a gullible public by unregulated and unscrupulous entrepreneurs.

And as far as facial recognition is concerned, we are seeing services that effectively function as gateway drugs to normalise the technology.

FaceApp, for example, used to offer a ‘‘hot’’ filter that lightened the skin colour of black users to make them look more ‘‘European’’, but had to abandon it after widespread protests. It still offers ‘‘Black’’, ‘‘Indian’’ and ‘‘Asian’’ filters, though.

And, interestingly, Apple’s latest range of iPhones offers FaceID — which uses facial-recognition software to let the device identify its owner and enhance its ‘‘security’’.

The subliminal message of all this stuff, of course, is clear. It says facial recognitio­n is the wave of the future and there’s nothing to worry our silly little heads about.

Which is where Stark’s plutonium metaphor breaks down.

Nobody ever pretended that plutonium was not dangerous. — Guardian News and Media

PHOTO: GETTY IMAGES. Anonymous no more . . . Facial recognition technology is becoming more commonplace.
