You can’t reason with a biased AI algorithm
OVER THE past few years, artificial intelligence (AI) has been incorporated into more and more of the devices that are part of our daily lives. AI has become an indispensable part of modern business.
The harmful effects of AI are already with us: sowing division among people and groups, inadvertently marginalising certain communities, capturing our attention, and widening the gap between rich and poor.
About three decades ago, when algorithms were used mostly by computer scientists, algorithmic bias was not a pressing problem. But AI has since found its way into far more sensitive areas: processing loan applications, analysing job interviews, making hiring decisions, adaptive pricing, credit scoring, facial recognition, health care and housing.
Many instances of algorithmic bias have come to light over the past few years. A few recent examples:
◆ The risk assessment algorithm used by the US judicial system to predict a defendant’s likelihood of reoffending, or to set the bond amount during bail, was found to be biased, with significant disparities in the risk it assigned to defendants of different races.
◆ Microsoft, IBM and Face++ developed facial detection systems that performed poorly on black female faces, owing to the under-representation of darker skin tones in the training data used to create the algorithms.
◆ Microsoft’s facial expression cloud service fared poorly at analysing the facial expressions of children under a certain age, due to shortcomings in the data used to train the algorithms.
◆ Google’s photo-organising algorithm grossly mislabelled images of black people.
◆ Apple’s credit card lending algorithm discriminated against women, offering them less credit than men with similar incomes and circumstances.
◆ An algorithm used by most healthcare systems in the US was found to be biased against black patients, making them less likely to receive important medical treatment. The algorithm screened patients for “high-risk care management” intervention and relied on patients’ treatment costs as a proxy for their health. But because of unequal access to health care, black patients spent less on treatment, so the algorithm systematically understated how sick they were.
All of the above problems were eventually fixed, or the algorithms discontinued, because algorithmic fairness is critical to any legitimate use of AI.
One of the gravest problems with algorithmic bias is that you cannot reason with an algorithm. Once the opaque decision has been handed down by the algorithmic overlord, there is little the affected person can do.
Biased data used to train an algorithm produces biased results in the automated system built on it. Because AI systems learn to make decisions from historical data, they often perpetuate the biases recorded in that data.
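A minimal sketch makes the mechanism concrete. Everything below is synthetic and hypothetical: a model trained on past hiring decisions that favoured one group will reproduce that favouritism for candidates of identical ability.

```python
# Minimal illustration: a model trained on biased historical decisions
# reproduces the bias. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)        # genuinely job-relevant signal

# Past human decisions favoured group 1 independently of skill:
hired = (skill + 0.8 * group + rng.normal(0.0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill but different group membership
# receive very different predicted hiring probabilities:
probe = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(probe)[:, 1])   # e.g. roughly 0.16 vs 0.73
```

Nothing in the learning procedure went wrong here; the model faithfully learned exactly what the historical data taught it.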
Machine learning, and deep learning in particular, is especially susceptible to bias. The aim of deep learning is to find patterns in the data it is trained on. If that data reaffirms false stereotypes, for instance by associating men with doctors and women with nurses, the algorithm will carry the bias into every future answer. In medicine, whether diagnosing skin cancer or choosing the best drug treatment based on biological markers, such biases can mean the difference between life and death.
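The doctor/nurse example can be made concrete with a toy sketch. The two-dimensional “embeddings” below are invented for illustration; real word vectors such as word2vec or GloVe, trained on large text corpora, exhibit the same analogy arithmetic.

```python
# Toy word-embedding sketch: stereotyped co-occurrence patterns in the
# training text surface as biased analogies. All vectors are invented.
import numpy as np

vec = {
    "man":      np.array([ 1.0, 0.2]),
    "woman":    np.array([-1.0, 0.2]),
    "doctor":   np.array([ 0.9, 1.0]),  # the text placed "doctor" near "man"
    "nurse":    np.array([-0.9, 1.0]),  # and "nurse" near "woman"
    "engineer": np.array([ 0.8, 0.9]),
    "teacher":  np.array([-0.5, 0.8]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def closest(target, exclude):
    # Nearest remaining word by cosine similarity (word2vec-style analogy).
    return max((w for w in vec if w not in exclude),
               key=lambda w: cosine(vec[w], target))

# "man is to doctor as woman is to ...?" resolves to "nurse" here,
# purely because of how the (biased) data laid the vectors out.
query = vec["doctor"] - vec["man"] + vec["woman"]
print(closest(query, exclude={"doctor", "man", "woman"}))   # -> nurse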
Companies and government agencies often introduce automated AI systems to cut costs and handle complex datasets, but some of these algorithms are opaque and unregulated, carrying biases that were unintentionally built into their code.
It is possible to fix the bias: with a new dataset and careful retraining of the algorithm, with an improved neural network, or by changing the very thing the algorithm is asked to predict.
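The last of these remedies is, in spirit, what resolved the healthcare case above: predicting patients’ actual health needs rather than their treatment costs. A hedged sketch, on synthetic data, of how swapping the target removes the proxy bias:

```python
# Sketch of fixing bias by changing the prediction target. Synthetic data,
# loosely modelled on the healthcare case: "cost" is a biased proxy for
# the real quantity of interest, "health need".
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                  # 1 = group with poorer access to care
need = rng.gamma(2.0, 1.0, n)                  # true underlying health need
cost = need * np.where(group == 1, 0.6, 1.0)   # unequal access depresses spending

# Features: a noisy clinical picture, plus group membership (standing in
# for the group-correlated proxies that real feature sets contain).
X = np.column_stack([need + rng.normal(0.0, 0.3, n), group])

biased = LinearRegression().fit(X, cost)   # proxy target inherits the bias
fixed  = LinearRegression().fit(X, need)   # direct target does not

# Two equally sick patients from different groups:
probe = np.array([[2.0, 0], [2.0, 1]])
print(biased.predict(probe))   # group 1 scored lower -> less likely flagged
print(fixed.predict(probe))    # roughly equal scores
```

The point is not the particular model but the target: any learner trained on the cost proxy will absorb the access gap, however carefully it is tuned.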
In the longer term, universities will have to rethink their computer science programmes and make ethics a core part of the curriculum. An ethics course that lacks current material and the latest technological thinking will not make sense.