Daily News

You can’t reason with a biased AI algorithm

- PROFESSOR LOUIS FOURIE is a futurist and technology strategist.

OVER THE past few years, artificial intelligence (AI) has been incorporated into more and more devices that are part of our daily life. AI has indeed become an indispensable part of modern business.

The darker effects of AI are already evident, often causing divisions among people and groups, inadvertently marginalising certain people, ensnaring our attention, and widening the gap between the rich and the poor.

About three decades ago, when algorithms were mostly used by computer scientists, algorithmic bias was not a problem. But AI has since found its way into more sensitive areas, such as processing loan applications, analysing interviews and making decisions about appointing employees, adaptive pricing, credit scoring, facial recognition, health care and housing.

Many instances of algorithmic bias have been discovered over the past few years. A few recent examples are:

◆ The risk assessment algorithm used by the US judicial system to predict a defendant’s likelihood of becoming a recidivist, or to determine the bond amount during bail, was found to be biased, with significant disparities in the risk scores assigned to different races.

◆ Microsoft, IBM and Face++ developed facial detection systems that did not perform well with black female faces, due to the under-representation of darker skin colours in the training data used to create the algorithms.

◆ Microsoft’s facial expression cloud service fared poorly in analysing the facial expressions of children under a certain age, due to shortcomings in the data used to train the algorithms.

◆ The Google photo-organising algorithm grossly mislabelled the images of black people.

◆ The Apple credit card lending algorithm discriminated against women and offered them less credit than men with similar income and circumstances.

◆ An algorithm used by most healthcare systems in the US was found to be biased against black patients, making it less likely that they would receive important medical treatment. The algorithm screened patients for “high-risk care management” intervention and relied on patient treatment cost data as a proxy for health. However, because of unequal access to health care, black patients spent less on treatment for the same level of need, which biased the algorithm against recommending treatment for them (a small numerical illustration follows this list).
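To make the proxy problem concrete, here is a minimal, purely hypothetical sketch in Python. The patient groups and numbers are invented for illustration and are not taken from the actual system; the point is only that ranking by past spending instead of true need pushes a lower-spending group down the priority list even when their need is identical.

```python
# Tiny illustration (all numbers invented) of the proxy problem described above:
# ranking patients by past spending instead of actual health need systematically
# under-ranks a group that has had less access to care.
patients = [
    # (patient, true_need, past_spend) -- group B spends less for the same need
    ("A1", 8, 8000), ("A2", 5, 5000),
    ("B1", 8, 4000), ("B2", 5, 2500),
]

by_spend = sorted(patients, key=lambda p: p[2], reverse=True)
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

print([p[0] for p in by_spend])  # ['A1', 'A2', 'B1', 'B2'] -- B1 falls below the less needy A2
print([p[0] for p in by_need])   # ['A1', 'B1', 'A2', 'B2'] -- need-based ranking treats B1 like A1
```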

All of the above problems were eventually fixed, or the algorithms were discontinued, since algorithmic fairness is critical in the use of AI.

One of the problems with algorithmic bias is the severe limitation that you cannot reason with an algorithm. Once the opaque decision has been made by the algorithmic overlord, little can be done.

Biased data sources used to train algorithms produce biased results in automated systems. Because AI systems learn to make decisions by looking at historical data, they often perpetuate existing biases.

Machine learning and deep learning are particularly susceptible to bias. The aim of deep learning is to find patterns in the data it is trained on. The data may reaffirm false stereotypes: where the training data associates men with doctors and women with nurses, for instance, the algorithm will apply this bias to all future questions. In the field of medicine, such as diagnosing skin cancer or determining the best drug treatment based on biological markers, these biases can mean the difference between life and death.
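The doctor/nurse example can be shown in a few lines of code. The following is a minimal, purely illustrative sketch using scikit-learn; the dataset and feature names are invented and do not come from any of the systems mentioned in this article. A classifier trained on historically skewed examples simply reproduces that skew.

```python
# Minimal illustrative sketch: a model trained on stereotyped historical data
# learns to predict the job label from gender rather than from qualifications.
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" records: [years_experience, is_male]
# Label: 1 = doctor, 0 = nurse. Gender, not experience, separates the labels.
X_train = [
    [5, 1], [7, 1], [6, 1], [8, 1],   # men, historically labelled "doctor"
    [5, 0], [7, 0], [6, 0], [8, 0],   # women, historically labelled "nurse"
]
y_train = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two candidates with identical experience, differing only in gender:
print(model.predict([[6, 1], [6, 0]]))  # typically [1 0] -- the stereotype is learned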

Companies and government agencies often introduce automated AI systems to cut costs and handle complex datasets, but some of these algorithms are opaque and unregulated, and contain biases that were unintentionally built into their code.

It is possible to fix the bias: with a new set of data and careful retraining of the algorithm, an improved neural network, or by changing the very thing that the algorithm is supposed to predict.
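Continuing the hypothetical sketch above (again, invented data, not any company’s actual remedy), retraining on a corrected dataset in which the spurious attribute no longer predicts the label is one simple way to remove the learned bias.

```python
# Illustrative sketch only: retrain on data where both genders appear with both
# labels, so that experience, not gender, drives the prediction.
from sklearn.linear_model import LogisticRegression

X_train = [
    [9, 1], [8, 0], [9, 0], [8, 1],   # senior candidates, labelled "doctor"
    [2, 1], [1, 0], [2, 0], [1, 1],   # junior candidates, labelled "nurse"
]
y_train = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Gender no longer flips the outcome for otherwise identical candidates:
print(model.predict([[6, 1], [6, 0]]))  # typically the same label for both
```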

In the longer term, universities will have to rethink their computer science programmes and at least accommodate ethics as a core part of the curriculum. An ethics course without current material or the latest technological thinking will not make sense.

Reuters | VISITORS check their phones behind a screen advertising facial recognition software during a conference in Beijing. Algorithm bias has resulted in some facial recognition systems discriminating against certain types of people, says the writer.
