AQ: Australian Quarterly

Can artificial intelligence be trusted with our human rights?

- EDWARD SANTOW

Too often, we humans make bad decisions. And those bad decisions have consequences, including for our human rights. Artificial intelligence offers the promise of better decisions, removing prejudice and other harmful irrationality—but how realistic is this promise? Focusing on facial recognition technology, this article explores how effectively AI is living up to the hype, and whether it will improve our human rights protections.

Sir Reginald Ansett was a captain of industry in the old-fashioned sense. The eponymous head of one of Australia's major airlines, Reg Ansett had strong views. One of his views was that women don't make good pilots.¹

But he met his match with Deborah Jane Lawrie. By age 18, Lawrie had earned a private pilot licence. By 24, she had a commercial licence and two university degrees. A year later, in 1976, Lawrie applied to join the pilot training program for Ansett Airlines.

She was ignored.

Over the next two years, she applied again and again. Eventually, in July 1978, she was interviewed and rejected. Ansett's policy was to employ only male pilots, something they sought to justify by reference to a range of prejudices about women's ability: physical strength, menstruation, pregnancy and childbirth.

Employing the sort of mental acrobatics that would have been forbidden in one of his passenger planes, Reg Ansett said that his company was not discriminating against Lawrie on the basis of her sex; it was just his strong personal view that women do not make good pilots.

Lawrie pressed her claim all the way to the High Court, and Ansett was ordered to include her in its next pilot training program. But her problems didn't end there. While she was included in the next intake, the company tried to terminate her after claiming – wrongly – that she had been at fault in a near miss. Then, unlike the other trainees, who were all male, she did not proceed from the classroom to flight training. Only after a corporate takeover of the airline was she finally assigned her first commercial flight in January 1980.

Deborah Lawrie's case is famous. It is a strong repudiation of the sorts of prejudice that prevented many women in the 1970s from achieving their goals.

The case is still taught in law schools today, partly because the facts are so stark. In most discrimination cases, the wrongdoer is at pains to cover their tracks. But here the discriminatory motivations were openly on display. To Reg Ansett, there was no discrimination because, in his eyes, he was simply stating an immutable difference between women and men.

It is difficult to imagine an executive today displaying such blatant discrimination. Sadly, of course, this does not mean that discrimination is confined to the past. Decision makers know it is unlawful and unacceptable to discriminate against someone on the basis of an attribute like sex, so some hide their true motivation behind an innocuous, invented rationale. A typical justification might be: ‘I'm not prejudiced against all women. I didn't hire this particular woman, because she didn't have the necessary qualifications for the job.'

Discrimination can be buried even deeper if the decision maker is unaware of their own prejudice. Many of us hold ingrained, unconscious prejudices based on sex, age, race, disability or other irrelevant characteristics. There is a growing awareness that this phenomenon, generally referred to as ‘unconscious bias', can cause unfairness, even discrimination, in recruitment and many other areas.²

Technology and rational decision making

Nowadays, cases like Ansett v Wardley are increasingly used as evidence of another argument entirely: that we humans tend to make bad decisions; that something innate to us can cause us to deviate from a pure path of reasoning, resulting in decisions infected with base motivations like sexism, racism or ageism.

Sometimes we are irrational in other ways. For example, there is some evidence that judges can make harsher parole decisions depending on whether or not they are hungry.³

While some elements of that particular study are contested,⁴ the underlying general principle is clearly true: human decision makers can be swayed by irrelevant and indeed irrational considerations.

If we accept that human decision making can be flawed – and flawed in ways that can be cruel and unfair – surely we should be open to different forms of decision making. What if new technology could help us make better decisions? And by ‘better' I mean decisions that avoid the pitfalls of prejudice and discrimination, of irrational considerations that intrude – sometimes consciously, sometimes unconsciously – on our thinking.

And just like that, a perfect-sounding solution began to materialise: artificial intelligence.

The term ‘AI' is itself appealing. It is anthropomorphic, conjuring the idea of human thought with all the rough edges smoothed out.

As computing power has increased exponentially over recent decades, we have started to glimpse AI's decision-making potential. For some, AI moved out of the realm of speculative fiction when it started to beat humans in games of skill.

Famously, in 1997, IBM's Deep Blue defeated chess grandmaster Garry Kasparov. Two decades later, AlphaGo, developed by Google's DeepMind, beat world champion Lee Sedol in the vastly more complex game of Go.

For others, AI's potential became real when it started to power new products and services, such as self-driving cars, smartphone applications that allow blind people to ‘see' the world around them, or the new generation of sophisticated (and, to some, unsettling) robots made by Boston Dynamics and others.

Beyond the whizz-bangery of new tech products, AI is ushering in major, perhaps even revolutionary, change. To Klaus Schwab, the founder of the World Economic Forum, AI catalysed the Fourth Industrial Revolution.⁵

But will this change be to our collective benefit? AI enthusiasts promise better, fairer decisions. There are various technologies and techniques associated with AI, including machine learning, automation and the use of ‘big data'. What binds them together is that they are all data driven. By relying only on data, the weaknesses associated with human decision making can be removed. This in turn can reduce, or even eliminate, prejudice and discrimination. At least, in theory.

To date, however, the actual experience of AI has been mixed. As explored below, facial recognition technology presents a useful case study, because it can be used to make decisions big and small. In some situations, the technology has started to live up to the hype, but it has also demonstrated how AI-powered technology can give rise to precisely the sorts of human rights violations and other problems that it is designed to avoid.

Facial recognition and decision making

The most common forms of facial recognition currently in use are one-to-one facial verification and one-to-many facial identification. Facial verification involves a computer checking whether a single headshot photograph matches a different headshot picture of the same person. It is particularly useful as a way of verifying whether an individual is who they claim to be, performing a similar task to a key or a password. Many of us use this technology to unlock smartphones and other electronic devices.

As with facial verification, one-to-many facial identification matches a single headshot with a different stored headshot of the same individual. The difference is that the matching headshot will be located somewhere in a larger store of headshots of other people.

This makes one-to-many facial identification much more difficult; it can be like finding a needle in a haystack. But it's also more useful. Facial identification doesn't just determine whether an individual is who they claim to be. It can answer a harder question: who is this person?
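
For the technically minded, the difference can be made concrete with a toy sketch. The Python snippet below is an illustration only – it uses made-up face ‘embeddings' (the numeric fingerprints a recognition model extracts from a headshot) and an arbitrary matching threshold, not the workings of any real system. It shows why identification is the harder task: verification compares one face against one stored record, while identification must search an entire gallery and can return the wrong person whenever a stranger happens to look numerically closer.

```python
# Toy illustration of one-to-one verification versus one-to-many identification.
# Embeddings here are just numpy vectors; THRESHOLD is an arbitrary example value.
import numpy as np

THRESHOLD = 0.6  # maximum distance at which two faces are treated as a "match"


def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two face embeddings."""
    return float(np.linalg.norm(a - b))


def verify(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """One-to-one verification: is this person who they claim to be?"""
    return distance(probe, enrolled) <= THRESHOLD


def identify(probe: np.ndarray, gallery: dict[str, np.ndarray]) -> str | None:
    """One-to-many identification: who, out of everyone in the gallery, is this?

    Returns the closest enrolled identity within the threshold, or None.
    """
    best_id, best_dist = None, float("inf")
    for person_id, enrolled in gallery.items():
        d = distance(probe, enrolled)
        if d < best_dist:
            best_id, best_dist = person_id, d
    return best_id if best_dist <= THRESHOLD else None
```

Nothing in the sketch is drawn from a real product; it is only meant to show that searching a large gallery multiplies the opportunities for a wrong match.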

For both types of facial recognition, strong human rights protections are needed. Most obviously, these are needed to protect the privacy and security of individuals' headshot photographs and other personal information stored in the relevant database. More fundamentally, any application must be accurate and reliable. In particular, any errors should not be disproportionately experienced by particular ethnic or other groups. As discussed below, if this happens, the consequence could be unlawful discrimination, perpetrated at a previously unimagined scale.

As AI technology moves from the laboratory and into the real world, the context in which this technology is used becomes more important. Provided the core human rights protections outlined above are adhered to, I have relatively few concerns about most current uses of one-to-one facial verification, such as those we see in modern smartphones.

But one-to-many facial identification is different. Its potential applications are limited only by one's imagination, and we have already seen deeply worrying examples of how it can cause harm.

In China, facial identification has been used to create ‘social credit' schemes that detect and penalise citizens automatically for minor offences such as jaywalking. More worrying still, it has been linked to systems of control and repression of certain ethnic groups, such as Uighur people in Xinjiang province.

Even in liberal democracies, companies have used facial recognition in dangerous ways in everything from banking to the workplace (including recruitment). In the last two years, we have seen high-profile examples of algorithmic bias in operation in settings ranging from credit ratings and lending to job advertising and candidate screening.

However, the danger can be greatest when democratic governments collaborate with tech companies to perform sensitive functions, such as policing and criminal justice. For instance, Clearview AI is partnering with law enforcement bodies around the world to use one-to-many facial identification to identify criminal suspects.

From a human rights perspective, there are two main concerns. The first relates to the accuracy of the technology. In a 2018 trial by the London Metropolitan Police, facial recognition was used to identify 104 previously unknown people who were suspected of committing crimes. Under freedom of information legislation, it turned out that 102 of those 104 identifications were incorrect. This amounts to a false positive rate of about 98%.⁷

Law enforcement is high stakes. From the moment a police officer wrongly identifies a suspect until the moment the officer realises their error, significant coercive action can take place: the suspect can be arrested, brought to a police station and detained. It can be terrifying, with irreversible consequences, including human rights violations.

Further, as noted above, people with dark skin are much more likely to experience errors in facial recognition. In a policing context, this could be catastrophic, reinforcing historical injustices that have been experienced disproportionately by people of colour. In August 2020, the England and Wales Court of Appeal expressed strong concern about the human rights implications of a facial recognition scheme trialled by the South Wales Police.⁸

Even if facial recognition technology were perfect, in the sense that it never resulted in error, there is also a more fundamental concern. The current trajectory in the use of this technology takes us towards mass surveillance. In China, facial recognition has become central to how the state interacts with, and controls, its people. While it is difficult to imagine the identical scenario in Australia, the Australian Government's Identity-matching Services Bill 2019 offers a vision of our possible future.

The Bill aims to enhance national security, combat crime and improve service delivery through a new scheme for facial recognition and other biometric technology. Police, other government agencies and even some corporations would be permitted to access this scheme for a wide range of purposes.

The Department of Foreign Affairs and Trade recently estimated that it processes a few hundred identity-matching requests each year. But, if this draft law were passed, the Department predicted that it would process a few thousand such requests each day.

We're not just talking about photograph­s of suspected criminals here; the Bill would permit access for a range of other purposes – sometimes to assist in preventing possible crime, gathering intelligen­ce, or just to verify someone's identity before accessing a service by a

102 OF THOSE 104 IDENTIFICA­TIONS WERE INCORRECT

bank or other company.

Last year, the Parliamentary Joint Committee on Intelligence and Security took the unusual step of unanimously recommending significant changes to the Bill, especially to deal with privacy concerns.⁹

How the Government responds, and the amendments it ultimately proposes to the Bill, will signal whether Australia is willing to accept mass surveillance of our community.

The future of facial recognition?

The future of facial recognition might not be limited to verifying or identifying people. Some suggest this technology can be used to assess an individual's age, emotions, personality and other characteristics – all from a single headshot photograph.

This emergent strain of facial recognition also relies on machine learning. But, here, it involves correlating certain facial features with particular characteristics. The idea is that the computer will learn to associate relevant characteristics with corresponding physical traits. For example, a correlation between old age and wrinkles could be used to assess an individual's age.

Make no mistake, this form of facial recognition is radical and controversial. In experiments, headshot photos have been used to predict an individual's personality traits or emotions,¹⁰ even their sexual orientation.¹¹ Like a modern-day phrenology, some use the technology to infer that particular bone structures, facial poses, eye shapes and so on are suggestive of… almost anything. Also like phrenology, so much of this form of facial recognition is junk science.

The problem starts with the process of labelling the headshot photographs in a training dataset. When these labels include information that can more easily be mistaken (like an individual's age or gender), or where these labels involve subjective judgments (like relative attractiveness or happiness), the computer will essentially learn to take on the subjective beliefs of the people who are assigning the labels.

If a labeller finds people with blue eyes attractive, the computer will associate blue eyes with attractiveness. Taking on the labeller's subjectivity means taking on their personal tastes, culturally informed preferences, conscious and unconscious biases, and any number of other non-rational factors.
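
To see how directly that subjectivity carries over, consider the deliberately simplified Python sketch below. Everything in it is invented for illustration – a pretend labeller, made-up probabilities, and a ‘model' that is nothing more than a frequency count – but it captures the mechanism: the system's output is simply the labeller's taste, averaged and repeated at scale.

```python
# Toy illustration of label bias: the "model" learns whatever the labeller prefers.
import random

random.seed(0)


def biased_labeller(blue_eyes: bool) -> int:
    """A hypothetical labeller who usually marks blue-eyed faces as 'attractive'."""
    if blue_eyes:
        return 1 if random.random() < 0.9 else 0
    return 1 if random.random() < 0.2 else 0


# A pretend training set: 1,000 headshots, half with blue eyes, labelled by one person.
data = [(blue_eyes, biased_labeller(blue_eyes)) for blue_eyes in [True, False] * 500]


def learned_rating(blue_eyes: bool) -> float:
    """The simplest possible 'model': the share of matching photos labelled attractive."""
    labels = [label for feature, label in data if feature == blue_eyes]
    return sum(labels) / len(labels)


print(f"P(labelled attractive | blue eyes)     = {learned_rating(True):.2f}")   # roughly 0.9
print(f"P(labelled attractive | not blue eyes) = {learned_rating(False):.2f}")  # roughly 0.2
```

A real system would use a far more elaborate model than a frequency count, but the pattern it reproduces is still only as fair as the labels it was given.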

The outputs of this sort of facial recognition system might be meaningless garbage. However, when it is dressed up with a veneer of ‘advanced technology', we are more likely to believe it. Dr Niels Wouters, with his ‘Biometric Mirror', has shown how vulnerable we can be to such myths.

Conclusion

The problems associated with conventional human decision making are real. That we want better, fairer decisions makes us peculiarly susceptible to the promise of artificial intelligence.

AI might not be a panacea for the disease of bad decision making, but that isn't a reason to reject it entirely. Being more sceptical about AI would allow us to adopt this technology mindfully, avoiding unnecessary harm.

In our project on human rights and technology, we at the Australian Human Rights Commission are seeking to do exactly this. Facing up to the limitations of AI, and especially the capacity for AI to be used in ways that can violate people's human rights, gives us the opportunity to develop decision-making systems that draw on the respective strengths of human and machine, without resurrecting old forms of discrimination in a new way.
