Indwe

Who’s Your Next Job Interviewer?

The Inequality of Facial Analysis AI

- Ivan Manokha: Departmental Lecturer in International Political Economy, University of Oxford / www.theconversation.com Images © iStockphoto.com

Facial analysis software powered by artificial intelligence is becoming commonplace in job interviews. The technology, developed by US company HireVue, analyses the language and tone of a candidate’s voice and records their facial expressions as they are videoed answering identical questions.

It was used in the UK for the first time in September but has been used around the world for several years. Some 700 companies – including Vodafone, Hilton and Urban Outfitters – have tried it out.

Certainly, there are significant benefits to be had from this. HireVue says its rapid information processing speeds up the hiring process by 90%. But there are important risks we should be wary of when outsourcing job interviews to AI.

The AI is built on algorithms that assess applicants against its database of about 25,000 pieces of facial and linguistic information. These are compiled from previous interviews with “successful hires” – those who have gone on to be good at the job. The 350 linguistic elements include criteria like a candidate’s tone of voice, their use of passive or active words, sentence length, and the speed at which they talk. The thousands of facial features analysed include brow furrowing, brow raising, how wide the eyes open or close, lip tightening, chin raising and smiling.

The fundamental issue with this, as critics of AI often point out, is that the technology is not born in a perfect society. It is created within our existing society, which is marked by a whole range of biases, prejudices, inequalities and discrimination. The data from which algorithms “learn” to judge candidates contains these existing sets of beliefs.

As UCLA professor Safiya Noble demonstrates in her book Algorithms of Oppression, a few simple Google searches show this happening. For example, when you search the term “professor style”, Google Images returns exclusively middle-aged white men. You get similar results for a “successful manager” search. By contrast, a search for “housekeeping” returns pictures of women.

This reflects how algorithms have “learnt” that professors and managers are mostly white men, while those who do housekeeping are women. And by delivering these results, algorithms necessarily contribute to the consolidation, perpetuation and potentially even amplification of existing beliefs and biases. For this very reason, we should question the intelligence of AI. The solutions it provides are necessarily conservative, leaving little room for innovation and social progress.

“Symbolic Capital”

As French sociologist Pierre Bourdieu emphasised in his work on the way that inequalities are reproduced, we all have very different economic and cultural capital. The environment in which we grow up, the quality of the teaching we had, the presence or absence of extracurricular activities and a range of other factors have a decisive impact on our intellectual abilities and strengths. This also has a big impact on the way we perceive ourselves – our levels of self-confidence, the objectives we set for ourselves, and our chances in life.

Another famous sociologist, Erving Goffman, called it a “sense of one’s place”. It is this ingrained sense of how we should act that leads people with less cultural capital (generally from less privileged backgrounds) to keep to their “ordinary” place. This is also reflected in our body language and the way we speak. So there are those who, from an early age, have a stronger confidence in their abilities and knowledge. And there are many others who have not been exposed to the same teachings and cultural practices, and as a result may be more timid and reserved. They may even suffer from an inferiority complex.

All of this will come across in job interviews. Ease, confidence, self-assurance and linguistic skills become what Bourdieu called “symbolic capital”. Those who possess it will be more successful – whether or not those qualities are actually the best for the job, or bring something new to it.

Of course, this has always been the case in society. But artificial intelligence will only reinforce it – particularly when AI is fed data on the candidates who were successful in the past. This means companies are likely to hire the same types of people that they have always hired.

The big risk here is that those people are all from the same set of backgrounds. Algorithms leave little room for subjective appreciation, for risk-taking, or for acting upon a feeling that a person should be given a chance.

In addition, this technology may lead to the rejection of talented and innovative people who simply do not fit the profile of those who smile at the right moment or have the required tone of voice. And this may actually be bad for businesses in the long run, as they risk missing out on talent that comes in unconventional forms.

More concerning is that this technology may also inadvertently exclude people from diverse backgrounds, and give more chances to those who come from privileged ones. As a rule, the latter possess greater economic and social capital, which allows them to obtain the skills that become symbolic capital in an interview setting.

What we see here is another manifestation of the more general issues with AI. Technology that is developed using data from our existing society, with its various inequalities and biases, is likely to reproduce them in the solutions and decisions that it proposes.
