Why employers are using AI to judge candidates on face value

- Rhodri Marsden

Nearly all of us have been guilty of making assumptions about people based on the way they look. The idea that someone’s character somehow corresponds with their facial features and expressions is one that we find ourselves returning to again and again – when we’re at work, when we’re dating, even when we’re voting for politicians.

But are those hunches of ours ever correct? And if there is a positive correlation, could artificial intelligence (AI) analyse videos and images of our faces to produce assessments of our emotions, our honesty or even how hard-working we might be?

That’s one of the ideas behind HireVue, an American company that is changing the way companies hire new workers. A video interview, or “pre-hire assessment”, is processed by an algorithm which, according to the firm, “augments human decision-making in the hiring process and delivers higher quality talent, faster”.

Or, if you prefer, a computer whittles down the list of candidates, and if it doesn’t think you’re up to it, you won’t appear on the shortlist. Firms such as Vodafone, Unilever and Hilton Hotels are among hundreds that use the system. Advocates say that it’s efficient and delivers brilliant results; opponents say that it’s inherently biased, rooted in dubious science and unaccountable for the decisions it makes.

Attempts to link personality with appearance using scientific theory are as old as the hills. In Ancient Greece, Aristotle originated the concept of “Physiognomica”, claiming that it’s possible “to infer character from features”. Similar theories were used and taught across the world for centuries, and while they eventually came to be widely discredited, AI has brought about something of a resurgence.

In 2016, researchers at Shanghai’s Jiao Tong University claimed to have invented a method of using machine learning to infer criminality from facial images – or, in other words, they believed that they’d established a relationship between looking like a criminal and being one. The work was widely criticised. Alexander Todorov, professor of psychology at Princeton University, said of people claiming a relationship between faces and character that they “have not given much thought to their underlying assumptions”. We tend to generalise. And when we generalise, we often get it wrong.

Unlike the Shanghai study, HireVue doesn’t assess mugshots. It analyses the smallest details of a prospective candidate’s interview tape and compares them to a database of 25,000 characteristics, from facial to linguistic. The speed at which they talk, their tone of voice, a furrowed brow or a nervous blink could all feed into their score. That such metrics are being used to assess an employee’s worth has caused a level of disquiet, not least among people who have been rejected by the system.

Loren Larsen, the chief technology officer at HireVue, addressed this concern in an interview with the Washington Post by comparing it to a traditional job interview: “People are rejected all the time based on how they look,” he said. “Algorithms eliminate most of that in a way that hasn’t been possible before.” He went on to refer to the mysterious nature of human decision-making as “the ultimate black box”.

Many would agree that human recruiters can be prejudiced and liable to make biased choices. For starters, there is a well-established and long-evolved bias against “ugliness”, which assumes that attractive, personable people are simply better at everything. The question is whether a machine could be less biased, given that the data it learns from comes from a flawed and prejudiced society. Would it not merely reflect the biases of the system it’s replacing?

Oxford University’s Ivan Manokha believes so. “[If] AI is fed data of the candidates who were successful in the past, [then] companies are likely to hire the same types of people that they have always hired,” he writes. He also expresses concern that algorithms may “contribute to the … amplification of existing beliefs and biases. The solutions it provides are necessarily conservative, leaving little room for innovation and social progress.”

Thus far, HireVue hasn’t allowed its system to be independently audited, and as such there’s no real understanding of the assessments that are being made, or why. The mystery of what makes a model HireVue candidate makes preparation difficult, and declining to take the test may prevent you from being shortlisted at all. Some US lawmakers are now attempting to force companies to reveal the criteria by which AI may be filtering job applicants, partly to help people understand how they’re being evaluated, and partly to rule out the possibility that prejudices are hiding within another “black box” – one that cannot answer to criticism.

HireVue is by no means the only firm using these kinds of systems, or indeed finding new applications for them. Amazon offers a service called Rekognition, which claims to assess facial emotion across eight categories: happy, sad, angry, surprised, disgusted, calm, confused and fearful. Across the “emotion detection” industry, new metrics are being devised to produce a wealth of data points. One firm, Faception, claims to use machine learning and image data to place people in categories such as “High IQ”, “Academic Researcher”, “Terrorist” or “Paedophile”. Here, there are faint echoes of the work of 19th-century academic Cesare Lombroso, who, after conducting autopsies, stated what he believed to be common physical characteristics of criminals: “Unusually short or tall height … wrinkles on forehead and face … beaked or flat nose … strong jaw line … weak chin.”
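For readers curious about what such a service actually returns, the short sketch below (in Python, using Amazon’s boto3 library) asks Rekognition for its per-emotion confidence scores on a single image. It is a minimal illustration under stated assumptions – the image file name is hypothetical and AWS credentials are assumed to be configured – not a depiction of how HireVue or any employer uses these tools.

import boto3

# Minimal sketch: request Rekognition's emotion scores for one image.
# Assumes AWS credentials are already configured for boto3.
client = boto3.client("rekognition")

with open("interview_frame.jpg", "rb") as f:  # hypothetical file name
    image_bytes = f.read()

# Attributes=["ALL"] requests the full set of facial attributes,
# including a confidence score for each of the emotion categories.
response = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])

for face in response["FaceDetails"]:
    for emotion in face["Emotions"]:
        # Prints one line per category, e.g. "CALM: 87.3%"
        print(f"{emotion['Type']}: {emotion['Confidence']:.1f}%")

Each detected face comes back with a confidence score against every category, which is what makes it so easy to turn an expression into a “data point” – and so tempting to treat that number as ground truth.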

Lombroso’s links were weak and contradictory, but some academics are now criticising AI for making similarly weak links – particularly between facial expression and emotion. Our expressions can mean different things in different cultures and different contexts, they say; we’re also adept at hiding our feelings, or indeed exhibiting ones we didn’t intend. While human interviewers may sense this awkwardness and make allowances for it, there is a fear that a computer cannot.

As Manokha says: “Technology may lead to the rejection of talented and innovative people who simply do not fit the profile of those who smile at the right moment, or have the required tone of voice.” Human talent can come in many unconventional forms. The challenge for machines is to appreciate and understand the mavericks among us.

Facial recognition software analyses even the smallest aspects of an interview. Getty
