Machines can never detect the nuances of human mannerisms
Airports tend to leave one look plastered on my face – that of insolent boredom. When quizzed, whether in the check-in queue, at passport control or even customs, my overwhelming feeling is one of thoughtless impatience. Can’t they tell I’m not the bloody enemy? Lord only knows what my face is doing at that point, but I’m pretty sure it’s nothing that looks particularly honest. In fact, I’m pretty sure that the more irritated I am with stupid procedures, the shiftier I look.
Which is why, though in general I’m in favour of AI, the suggestion last week that soon it will be computers, rather than security staff, reading travellers’ expressions to determine who is lying was far from cheering.
American researchers used a machine-learning program to analyse a large sample of liars’ faces alongside the expressions of those being honest. Apparently we now know, thanks to AI, that liars smile with their eyes, while honest people scrunch theirs without smiling.
I can think of countless instances of people looking one way and thinking another. And, no doubt, once we all read about what the authorities are now looking for with their little AI helpers, liars will be sure not to smile with their eyes. The fact is that, to date, only a human can read context properly and, most importantly, can sense rather than see that something is off.
There’s a political correctness angle to this project, too: it’s hoped that AI expression-reading will reduce the need for “racial profiling”. What this actually means is that we plan to put in place machines that are certain to confuse matters, waste time and perhaps even imperil security, partly so that we don’t offend cultural sensibilities.
We don’t need more AI in airports; we need to invest more in training staff – not to be politically correct, but to read people properly.
In this, we should copy Israel, whose ornate system of (human) questioning and interpretation actually works.