Can artificial intelligence be trusted with our human rights?
Too often, we humans make bad decisions. And those bad decisions have consequences, including for our human rights. Artificial intelligence offers the promise of better decisions, removing prejudice and other harmful irrationality—but how realistic is this promise? Focusing on facial recognition technology, this article explores how effectively AI is living up to the hype, and whether it will improve our human rights protections.
Sir Reginald Ansett was a captain of industry in the old-fashioned sense. The eponymous head of one of Australia's major airlines, Reg Ansett had strong views. One of his views was that women don't make good pilots.¹
But he met his match with Deborah Jane Lawrie. By age 18, Lawrie had earned a private pilot licence. By 24, she had a commercial licence and two university degrees. A year later, in 1976, Lawrie applied to join the pilot training program for Ansett Airlines.
She was ignored.
Over the next two years, she applied again and again. Eventually, in July 1978, she was interviewed and rejected. Ansett's policy was to employ only male pilots, something they sought to justify by reference to a range of prejudices about women's ability: physical strength, menstruation, pregnancy and childbirth.
Employing the sort of mental acrobatics that would have been forbidden in one of his passenger planes, Reg Ansett said that his company was not discriminating against Lawrie on the basis of her sex; it was just his strong personal view that women do not make good pilots.
Lawrie pressed her claim all the way to the High Court and Ansett was ordered to include Lawrie in its next pilot training program, but her problems didn't end there. While she was included in the next intake, the company tried to terminate her after claiming – wrongly – that she had been at fault in a near miss. Then, unlike the other trainees who were all male, she did not proceed from the classroom to flight training. Only after a corporate take-over of the airline was she finally assigned her first commercial flight in January 1980.
Deborah Lawrie's case is famous. It is a strong repudiation of the sorts of prejudice that prevented many women in the 1970s from achieving their goals.
The case is still taught in law schools today, partly because the facts are so stark. In most discrimination cases, the wrongdoer is at pains to try and cover their tracks. But here the discriminatory motivations were openly on display. To Reg Ansett, there was no discrimination because, in his eyes, he was simply stating an immutable difference between women and men.
It is difficult to imagine an executive today displaying such blatant discrimination. Sadly, of course, this does not mean that discrimination is confined to the past. Decision makers know it is unlawful and unacceptable to discriminate against someone on the basis of an attribute like sex, so some hide their true motivation behind an innocuous, invented rationale. A typical justification might be: ‘I'm not prejudiced against all women. I didn't hire this particular woman, because she didn't have the necessary qualifications for the job.'
Discrimination can be buried even deeper if the decision maker is unaware of their own prejudice. Many of us hold ingrained, unconscious prejudices based on sex, age, race, disability or other irrelevant characteristics. There is a growing awareness that this phenomenon, generally referred to as ‘unconscious bias', can cause unfairness, even discrimination, in recruitment and many other areas.²
Technology and rational decision making
Nowadays cases like Ansett v Wardley are increasingly used as evidence of another argument entirely: that we humans tend to make bad decisions. The suggestion is that something innate to humans can cause us to deviate from a pure path of reasoning, and that this can result in decisions infected with base motivations, like sexism, racism or ageism.
Sometimes we are irrational in other ways. For example, there is some evidence that judges can make harsher parole decisions depending on whether or not they are hungry.³
While some elements of that particular study are contested,⁴ the underlying general principle is clearly true: human decision makers can be swayed by irrelevant and indeed irrational considerations.
If we accept that human decision making can be flawed – and flawed in ways that can be cruel and unfair – surely we should be open to different forms of decision making. What if new technology could help us make better decisions? And by ‘better' I mean decisions that avoid the pitfalls of prejudice and discrimination, of irrational considerations that intrude – sometimes consciously, sometimes unconsciously – on our thinking.
And just like that, a perfect-sounding solution began to materialise: artificial intelligence.
The term ‘AI' is itself appealing. It is anthropomorphic, conjuring the idea of human thought with all the rough edges smoothed out.
As computing power has increased exponentially over recent decades, we have started to glimpse AI's decision-making potential. For some, AI moved out of the realm of speculative fiction when it started to beat humans in games of skill.
Famously, in 1997, IBM's Deep Blue defeated chess grand master Garry Kasparov. Two decades later, Google DeepMind's AlphaGo beat world champion Lee Sedol in the vastly more complex game of Go.
For others, AI's potential became real when it started to power new products and services, such as self-driving cars, smartphone applications that allow blind people to ‘see' the world around them, or the new generation of sophisticated (and, to some, unsettling) robots made by Boston Dynamics and others.
Beyond the whizz-bangery of new tech products, AI is ushering in major, perhaps even revolutionary, change. To Klaus Schwab, the founder of the World Economic Forum, AI catalysed the Fourth Industrial Revolution.⁵
But will this change be to our collective benefit? AI enthusiasts promise better, fairer decisions. There are various technologies and techniques associated with AI, including machine learning, automation and the use of ‘big data'. What binds them together is that they are all data driven. By relying only on data, the weaknesses associated with human decision making can be removed. This in turn can reduce, or even eliminate, prejudice and discrimination. At least, in theory.
To date, however, the actual experience of AI has been mixed. As explored below, facial recognition technology presents a useful case study, because it can be used to make decisions big and small. In some situations, the technology has started to live up to the hype, but it has also demonstrated how AI-powered technology can give rise to precisely the sorts of human rights violations and other problems that it is designed to avoid.
Facial recognition and decision making
The most common forms of facial recognition currently in use are one-to-one facial verification and one-to-many facial identification. Facial verification involves a computer checking whether a single headshot photograph matches a different headshot picture of the same person. It is particularly useful as a way of verifying whether an individual is who they claim to be, performing a similar task to a key or a password. Many of us use this technology to unlock smartphones and other electronic devices.
As with facial verification, one-to-many facial identification matches a single headshot with a different stored headshot of the same individual. The difference is that the matching headshot will be located somewhere in a larger store of headshots of other people.
This makes one-to-many facial identification much more difficult, and can be like finding a needle in a haystack. But it's also more useful. Facial identification doesn't just determine whether an individual is who they claim to be. It can answer a harder question: who is this person?
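The distinction between the two tasks can be sketched in a few lines of code. This is an illustrative toy, not any vendor's actual system: real systems derive face ‘embeddings' from deep neural networks, whereas the vectors, names and matching threshold below are invented for illustration.

```python
import math

# Hypothetical face "embeddings": in real systems these are vectors
# produced by a neural network; here they are made-up numbers.
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
    "carol": [0.4, 0.4, 0.9],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe, claimed_id, threshold=0.95):
    """One-to-one: does the probe match the single stored headshot?"""
    return cosine_similarity(probe, gallery[claimed_id]) >= threshold

def identify(probe, threshold=0.95):
    """One-to-many: search the whole gallery for the best match."""
    best_id = max(gallery, key=lambda pid: cosine_similarity(probe, gallery[pid]))
    if cosine_similarity(probe, gallery[best_id]) >= threshold:
        return best_id
    return None  # no confident match

probe = [0.88, 0.12, 0.31]     # a new headshot, close to "alice"
print(verify(probe, "alice"))  # one comparison: True
print(identify(probe))         # one comparison per gallery entry: alice
```

Verification compares against a single stored record; identification must search every record, which is why it is both harder (the needle-in-a-haystack problem) and more powerful.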
For both types of facial recognition, strong human rights protections are needed. Most obviously, these are needed to protect the privacy and security of individuals' headshot photographs and other personal information stored in the relevant database. More fundamentally, any application must be accurate and reliable. In particular, any errors should not be disproportionately experienced by particular ethnic or other groups. As discussed below, if this happens, the consequence could be unlawful discrimination, perpetrated at a previously unimagined scale.
As AI technology moves from the laboratory and into the real world, the context in which this technology is used becomes more important. Provided the core human rights protections outlined above are adhered to, I have relatively few concerns about most current uses of one-to-one facial verification, such as those we see in modern smartphones.
But one-to-many facial identification is different. Its potential applications are limited only by one's imagination, and we have already seen deeply worrying examples of how it can cause harm.
In China, facial identification has been used to create ‘social credit' schemes that detect and penalise citizens automatically for minor offences such as jaywalking. More worrying still, it has been linked to systems of control and repression of certain ethnic groups, such as Uighur people in Xinjiang province.
Even in liberal democracies, companies have used facial recognition in risky, sometimes harmful ways in everything from banking to the workplace (including recruitment). In the last two years, we have seen high-profile examples of algorithmic bias in operation in settings ranging from credit ratings and lending to job advertising and candidate screening.
However, the danger can be greatest when democratic governments collaborate with tech companies to perform sensitive functions, such as policing and criminal justice. For instance, Clearview AI is partnering with law enforcement bodies around the world to use one-to-many facial identification to identify criminal suspects.
From a human rights perspective, there are two main concerns. The first relates to the accuracy of the technology. In a 2018 trial by the London Metropolitan Police, facial recognition was used to identify 104 previously unknown people who were suspected of committing crimes. Under freedom of information legislation, it turned out that 102 of those 104 identifications were incorrect. This amounts to a false positive rate of about 98%.⁷
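The arithmetic behind that figure is simple enough to check; a back-of-envelope sketch, using only the trial numbers reported above:

```python
identified = 104   # people the system flagged as suspects
wrong = 102        # flags later found to be misidentifications

# Share of the system's identifications that were incorrect.
false_positive_rate = wrong / identified
print(f"{false_positive_rate:.0%}")  # → 98%
```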
Law enforcement is high stakes. From the moment a police officer wrongly identifies a suspect until the moment the officer realises their error, significant coercive action can take place: the suspect can be arrested, brought to a police station and detained. It can be terrifying, with irreversible consequences, including human rights violations.
Further, as noted above, people with dark skin are much more likely to experience errors in facial recognition. In a policing context, this could be catastrophic, reinforcing historical injustices that have been experienced disproportionately by people of colour. In August 2020, the England and Wales Court of Appeal expressed strong concern about the human rights implications of a facial recognition scheme trialled by the South Wales Police.⁸
Even if facial recognition technology were perfect, in the sense that it never resulted in error, there is also a more fundamental concern. The current trajectory in the use of this technology takes us towards mass surveillance. In China, facial recognition has become central to how the state interacts with, and controls, its people. While it is difficult to imagine the identical scenario in Australia, the Australian Government's Identity-matching Services Bill 2019 offers a vision of our possible future.
The Bill aims to enhance national security, combat crime and improve service delivery through a new scheme for facial recognition and other biometric technology. Police, other government agencies and even some corporations would be permitted to access this scheme for a wide range of purposes.
The Department of Foreign Affairs and Trade recently estimated that it processes a few hundred identity-matching requests each year. But, if this draft law were passed, the Department predicted that it would process a few thousand such requests each day.
We're not just talking about photographs of suspected criminals here; the Bill would permit access for a range of other purposes – sometimes to assist in preventing possible crime, gathering intelligence, or just to verify someone's identity before they access a service provided by a bank or other company.
Last year, the Parliamentary Joint Committee on Intelligence and Security took the unusual step of unanimously recommending significant changes to the Bill, especially to deal with privacy concerns.⁹
How the Government responds, and the amendments it ultimately proposes to the Bill, will signal whether Australia is willing to accept mass surveillance of our community.
The future of facial recognition?
The future of facial recognition might not be limited to verifying or identifying people. Some suggest this technology can be used to assess an individual's age, emotions, personality and other characteristics – all from a single headshot photograph.
This emergent strain of facial recognition also relies on machine learning. But, here, it involves correlating certain facial features with particular characteristics. The idea is that the computer will learn to associate relevant characteristics with corresponding physical traits. For example, a correlation between old age and wrinkles could be used to assess an individual's age.
Make no mistake, this form of facial recognition is radical and controversial. In experiments, headshot photos have been used to predict an individual's personality traits or emotions,¹⁰ even their sexual orientation.¹¹ Like a modern-day phrenology, some use the technology to infer that particular bone structures, facial poses, eye shapes and so on are suggestive of… almost anything. Also like phrenology, so much of this form of facial recognition is junk science.
The problem starts with the process of labelling the headshot photographs in a training dataset. When these labels include information that can easily be mistaken (like an individual's age or gender), or where these labels involve subjective judgments (like relative attractiveness or happiness), the computer will essentially learn to take on the subjective beliefs of the people assigning the labels.
If a labeller finds people with blue eyes attractive, the computer will associate blue eyes with attractiveness. Taking on the labeller's subjectivity means taking on their personal tastes, culturally-informed preferences, conscious and unconscious biases, and any number of other non-rational factors.
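A toy sketch of this mechanism, with all features, labels and the ‘model' invented for illustration (real systems are vastly more complex, but the dynamic is the same): a learner trained on one person's labels simply reproduces that person's taste.

```python
from collections import defaultdict

# Hypothetical training set labelled by one person who happens to rate
# blue-eyed faces as "attractive". The faces and labels are made up.
training = [
    ({"eyes": "blue",  "smile": True},  "attractive"),
    ({"eyes": "blue",  "smile": False}, "attractive"),
    ({"eyes": "brown", "smile": True},  "plain"),
    ({"eyes": "brown", "smile": False}, "plain"),
]

def train(examples):
    """Count how often each feature value co-occurs with each label."""
    counts = defaultdict(lambda: defaultdict(int))
    for features, label in examples:
        for key, value in features.items():
            counts[(key, value)][label] += 1
    return counts

def predict(counts, features):
    """Pick the label whose co-occurrence counts score highest."""
    scores = defaultdict(int)
    for key, value in features.items():
        for label, n in counts[(key, value)].items():
            scores[label] += n
    return max(scores, key=scores.get)

model = train(training)

# A new blue-eyed face is judged "attractive" purely because the
# labeller's taste correlated eye colour with that label.
print(predict(model, {"eyes": "blue", "smile": False}))  # → attractive
```

The model has learned nothing about attractiveness; it has learned the labeller. Swap in a labeller with different tastes and the ‘objective' output changes with them.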
The outputs of this sort of facial recognition system might be meaningless garbage. However, dressed up with a veneer of ‘advanced technology', we are more likely to believe it. Dr Niels Wouters, with his ‘Biometric Mirror', has shown how vulnerable we can be to such myths.
Conclusion
The problems associated with conventional, human decision making are real. That we want better, fairer decisions makes us peculiarly susceptible to the promise of artificial intelligence.
AI might not be a panacea for the disease of bad decision making, but that isn't a reason to reject it entirely. Being more sceptical about AI would allow us to adopt this technology mindfully, avoiding unnecessary harm.
In our project on human rights and technology, we at the Australian Human Rights Commission are seeking to do exactly this. Facing up to the limitations of AI, and especially the capacity for AI to be used in ways that can violate people's human rights, gives us the opportunity to develop decision-making systems that draw on the respective strengths of human and machine, without resurrecting old forms of discrimination in a new way.