The Week

The truth machine: the race to create a perfect lie detector

The science of exposing deception has a chequered past – but now, the rise of cheap computing power, brain-scanning technologies and AI has given birth to powerful new tools, says Amit Katwala. Could an infallible lie detector be just around the corner?

We learn to lie as children, between the ages of two and five. By adulthood, we are prolific. We lie to our employers, our partners and, most of all, one study has found, to our mothers. The majority of the 200 lies that researchers say we hear every day are “white” – the inconsequential niceties (“I love your dress!”) that grease the wheels of human interaction. But according to the psychologist Richard Wiseman, we also tell one or two “big” lies a day. We lie to promote or protect ourselves and to hurt or avoid hurting others.

The mystery is how we keep getting away with it. Our bodies expose us in every way. Hearts race, sweat drips, and micro-expressions leak from small muscles in the face. Even so, we are hopeless at spotting deception. On average, people can separate truth from lies just 54% of the time – hardly better than tossing a coin. “People are bad at it because the differences between truth-tellers and liars are typically small and unreliable,” said Aldert Vrij, a psychologist at the University of Portsmouth. Some people freeze when put on the spot; others become more animated. Liars can spin yarns packed with colour and detail; truth-tellers can seem vague and evasive.

Humans have been trying to overcome this problem for millennia. The search for a perfect lie detector has involved torture, trials by ordeal and, in ancient India, an encounter with a donkey in a dark room – if the donkey brayed, the accused’s guilt was confirmed. Three thousand years ago in China, the accused were forced to chew and spit out rice; the grains were thought to stick in the dry, nervous mouths of the guilty. In 1730, the writer Daniel Defoe suggested taking the pulse of suspected pickpockets. “Guilt carries fear always about with it,” he wrote. “There is a tremor in the blood of a thief.” More recently, lie detection has been equated with the juddering styluses of the polygraph machine, the quintessential lie detector beloved by TV detectives. But none of these methods have yielded a reliable way to separate fiction from fact.

That could soon change. In recent decades, the rise of cheap computing power, brain-scanning technologies and AI has given birth to what many claim is a powerful new generation of lie-detection tools. Start-ups want us to believe that a virtually infallible lie detector is just around the corner. Their inventions are being snapped up by police forces, state agencies and nations desperate to secure themselves against foreign threats. They are also being used by employers, insurance companies and welfare officers. “We’ve seen an increase in interest from both the private sector and within government,” said Todd Mickelsen, the CEO of Converus, which makes a lie detector based on eye movements and subtle changes in pupil size. Converus’s technology, EyeDetect, has been used by FedEx in Panama and Uber in Mexico to screen out drivers with criminal histories. Other customers include the government of Afghanistan, McDonald’s and dozens of US police departments.

Soon, large-scale lie-detection programmes could be coming to the borders of the US and the EU, where they would flag potentially deceptive travellers for further questioning. But as such tools infiltrate more and more areas of life, there are urgent questions to be answered about their scientific validity and ethical use. In our age of high surveillance, the idea that a machine could read our thoughts feels more plausible than ever. But what if lie-detection technology proves to be biased – or doesn’t actually work?

For most of us, lying is more stressful than honesty. It demands that we bear what psychologists call a cognitive load. Carrying that burden, most lie-detection theories assume, leaves evidence in our bodies and actions. As a result, lie-detection technologies tend to examine five types of evidence. The first two are verbal: the things we say and how we say them. Scientists have found that people who lie in their online dating profiles tend to use the words “I”, “me” and “my” more often, while voice-stress analysis, which aims to detect deception based on changes in tone of voice, has been used to catch benefit cheats over the phone. The third source of evidence – body language – can also reveal hidden feelings. Some liars display so-called “duper’s delight”, a fleeting expression of glee that crosses the face when they think they have got away with it. Cognitive load also makes people move differently, and liars trying to “act natural” can end up doing the opposite. The fourth type of evidence is physiological. The polygraph measures blood pressure, breathing rate and sweat. Infrared cameras analyse facial temperature. Unlike Pinocchio, our noses may actually shrink slightly when we lie as blood flows towards the brain.

In the 1990s, new technologies opened up a fifth avenue of investigation: the brain. In the second season of the hit Netflix documentary Making a Murderer, Steven Avery, who is serving a life sentence for a brutal killing he says he did not commit, undergoes a “brain fingerprinting” exam, which uses an electrode-studded headset called an electroencephalogram to translate his neural activity into waves rising and falling on a graph. The test’s inventor, Dr Larry Farwell, claims it can detect knowledge of a crime hidden in a suspect’s brain by picking up a neural response to phrases or pictures relating to the crime that only the perpetrator would recognise.

The development of other methods of brain-based lie detection was stepped up after the 9/11 terrorist attacks in 2001, when the US government – long a sponsor of deception science – started funding research through Darpa, the Defense Advanced Research Projects Agency. Now, a new frontier is emerging. An increasing number of projects use AI to combine multiple sources of evidence into a single measure. Machine learning is accelerating deception research. Scientists in Maryland, for example, have developed software they claim can detect deception from courtroom footage with 88% accuracy. The algorithms behind such tools are designed to improve over time, and may ultimately end up basing their determinations of innocence and guilt on factors that even the humans who programmed them don’t understand. These tests are being trialled in job interviews, at border crossings and in police interviews, but as they become more widespread, civil rights groups and scientists are growing concerned about the dangers they could unleash.

Nothing provides a clearer warning about the threats of the new generation of lie detection than the history of the world’s most widely used deception test. Almost a century old, the polygraph still dominates our view of lie detection, with millions of tests conducted around the world every year. In 1921, 29-year-old John Larson was a rookie police officer in Berkeley, California. Having studied physiology and criminology, he was also working part-time in a University of California lab, where he built a device that took continuous measurements of blood pressure and breathing rate, and scratched the results onto a rolling paper cylinder. He then devised an interview-based exam that compared a subject’s physiological response when answering “yes” or “no” questions relating to a crime with the subject’s answers to control questions, such as “Is your name Jane Doe?” From the late 1920s, the popularity of Larson’s invention took off – not least with the US government, which became the world’s largest user of the exam. During the “red scare” of the 1950s, thousands of employees were subjected to polygraphs designed to root out communists. For much of the last century, many US corporations also ran polygraph tests to quiz employees over such issues as drug use and theft.

The only problem was that the polygraph did not work. History is littered with examples of criminals who evaded detection by cheating the test: common “countermeasures”, which work by exaggerating the body’s response to control questions, include thinking about a frightening experience, or simply clenching the anus. The polygraph machine is not and never was an effective lie detector. There is no way for an examiner to know whether a rise in blood pressure is due to fear of getting caught in a lie, or anxiety about being wrongly accused. As long ago as 1965, the year Larson died, the US Committee on Government Operations issued a damning verdict on the polygraph. “People have been deceived by a myth that a metal box in the hands of an investigator can detect truth or falsehood,” it concluded.

The polygraph remained popular, though – not because it was effective, but because people thought it was. The threat of being outed by the machine was enough to coerce some into confession.

One examiner in Cincinnati in 1975 left the interrogation room and watched, bemused, through a two-way mirror as the accused tore 1.8 metres of paper charts off the machine and ate them. (You didn’t even have to have the right machine: in the 1980s, police officers in Detroit extracted confessions by placing a suspect’s hand on a photocopier that spat out sheets of paper with the phrase “He’s Lying!” pre-printed on them.) Larson himself recognised the coercive potential of his machine, describing it shortly before his death as “a Frankenstein’s monster”.

The search for a truly effective lie detector gained new urgency after 9/11. Several of the hijackers had managed to enter the US after successfully deceiving border agents. Suddenly, intelligence and border services wanted tools that actually worked. A flood of new government funding made lie detection big business again. More recently, the need to identify European terrorists returning from training abroad has produced a similar effect on the borders of the EU. In 2014, travellers flying into Bucharest were interrogated by a virtual border agent called Avatar, an on-screen figure with blue eyes, which has a microphone, an infra-red eye-tracking camera and a sensor to measure body movement. But its “secret sauce”, say its makers, is in the software, which uses an algorithm to combine all of these types of data. Avatar’s accuracy rates are claimed to be over 80% in preliminary studies.

New technologies may be harder than polygraphs for unscrupulous examiners to manipulate, but that does not mean they will be fair. Like their predecessors, AI-powered lie detectors prey on the tendency of both individuals and governments to put faith in science’s supposedly all-seeing eye. But history tells us that they may get aimed at society’s most vulnerable – suspected dissidents and homosexuals in the 1950s and 1960s, benefit claimants in the 2000s, and asylum seekers and migrants today.

One day, improvements in AI could find a reliable pattern for deception by scouring multiple sources of evidence, or more detailed scanning technologies could discover an unambiguous sign lurking in the brain. In the real world, however, practised falsehoods – the stories we tell ourselves about ourselves, the lies that form the core of our identity – complicate matters. “We have this tremendous capacity to believe our own lies,” said Dan Ariely, a renowned behavioural psychologist at Duke University. “And once we believe our own lies, of course we don’t provide any signal of wrongdoing.”

In his 1995 science-fiction novel The Truth Machine, James L. Halperin imagined a world in which someone succeeds in building a perfect lie detector. The invention helps unite the warring nations into a world government, and accelerates the search for a cancer cure. But evidence from the last hundred years suggests that it probably wouldn’t play out like that. The scientist Daniel Langleben told me that one government agency that approached him wasn’t interested in the accuracy rates of his lie detector, which uses functional magnetic resonance imaging, or fMRI. An fMRI machine cannot be packed into a suitcase or brought into a police interrogation room, and the investigator cannot manipulate the test results to apply pressure to an uncooperative suspect. The agency just wanted to know whether his technique could be used to train agents to beat the polygraph tests of others. “Truth is not really a commodity,” Langleben reflected. “Nobody wants it.”

A longer version of this article appeared in The Guardian. © Guardian News & Media Ltd 2019.

John Larson demonstrates his polygraph – “a Frankenstein’s monster”

Avery: underwent brain fingerprinting
