Facial recognition just the start of a very slippery slope
Facial recognition is big tech’s latest toxic ‘‘gateway’’ app, writes John Naughton, of the Observer.
THE headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. ‘‘Facial recognition is the plutonium of AI’’, it said.
Since plutonium — a byproduct of uranium-based nuclear power generation — is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read.
The article, by Microsoft researcher Luke Stark, argues that facial-recognition technology — one of the current obsessions of the tech industry — is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly.
You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera. There, it is regarded as universally beneficial.
If you have ever come across a suggestion on Facebook to tag a face with a suggested individual’s name, for example, then you have encountered the technology. And it has come on in leaps and bounds as cameras, sensors and machine-learning software have improved and as the supply of training data (images from social media) has multiplied.
We have now reached the point where it is possible to capture images of people’s faces and identify them in real time. Which is the thing that really worries Stark.
Why? Basically because facial-recognition technologies ‘‘have insurmountable flaws in the ways they schematise human faces’’ — particularly in that they reinforce discredited categorisations around race and gender.
In the light of these flaws, Stark argues, the risks of the technologies vastly outweigh the benefits in a way that is reminiscent of hazardous nuclear materials.
‘‘Facial recognition,’’ he says, ‘‘simply by being designed and built, is intrinsically socially toxic, regardless of the intentions of its makers; it needs controls so strict that it should be banned for almost all practical purposes.’’
There are two levels of concern here, one immediate and the other longer-term but perhaps more fundamental.
The short-term issue is that the technology is at present only good at recognising some kinds of faces, mostly those with white complexions, and has difficulty with people of colour.
Whether this is ‘‘insurmountable’’ (as Stark maintains) remains to be seen, but it is alarming enough already because it provides a means of ‘‘racialising’’ societies using the charisma of science.
The longer-term worry is that if this technology becomes normalised, then in the end it will be everywhere; all human beings will essentially be machine-identifiable wherever they go. At that point corporations and governments will have a powerful tool for sorting and categorising populations. And at the moment we seem to have no way of controlling the development of such tools.
To appreciate the depths of our plight with this stuff, imagine if the pharmaceutical industry were allowed to operate the way the tech companies do.
Day after day in their laboratories, researchers would cook up amazingly powerful, interesting and potentially lucrative new drugs which they could then launch on an unsuspecting public without any obligation to demonstrate their efficacy or safety.
Yet this is exactly what has been happening in tech companies for the past two decades — all kinds of ‘‘cool’’, engagement-boosting and sometimes addictive services have been cooked up and launched with no obligation to assess their costs and benefits to society.
In that sense one could think of Facebook Live, say, as the digital analogue of thalidomide — useful for some purposes and toxic for others.
Facebook Live turned out to be useful for a mass killer to broadcast his atrocity; thalidomide was marketed over the counter in Europe as a mild sleeping pill but ultimately caused the birth of thousands of deformed children, and untold anguish.
In the end, we will need some kind of control regime for what the tech companies produce — a kind of Federal Apps Administration, perhaps. But we are nowhere near that at the moment.
Instead (to continue the pharma metaphor) we are in the pre-pharmaceutical era of snake oil and patent medicines launched on a gullible public by unregulated and unscrupulous entrepreneurs.
And as far as facial recognition is concerned, we are seeing services that effectively function as gateway drugs to normalise the technology.
FaceApp, for example, used to offer a ‘‘hot’’ filter that lightened the skin colour of black users to make them look more ‘‘European’’, but had to abandon it after widespread protests. It still offers ‘‘Black’’, ‘‘Indian’’ and ‘‘Asian’’ filters, though.
And, interestingly, Apple’s latest range of iPhones offers FaceID — which uses facial-recognition software to let the device identify its owner and enhance its ‘‘security’’.
The subliminal message of all this stuff, of course, is clear. It says facial recognition is the wave of the future and there’s nothing to worry our silly little heads about.
Which is where Stark’s plutonium metaphor breaks down.
Nobody ever pretended that plutonium was not dangerous. — Guardian News and Media