Biases are creeping into the Internet’s AI systems
Social media platforms such as Twitter should recognise this problem and rectify it. Otherwise, they will pay a heavy price
Twitter is under the cosh these days from a section of citizens who feel that it is biased towards communist ideology. Concerns about bias on other social media platforms have also been raised in the last few years.
The standard reply of social media platform administrators to such accusations is that there is no manual intervention: the platforms are run by algorithm-based Artificial Intelligence (AI), so there is no question of bias.
The fact is that there is evidence of various kinds of bias in AI systems, and the algorithms are not as neutral as they are projected to be. A major challenge for every player in the field is how to make AI bias-free.
According to a recent research paper, Fairness and Abstraction in Sociotechnical Systems (Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 59-68, Atlanta, GA, USA, January 29-31, 2019), there are many ways in which ignoring social context can lead to severe bias in how AI operates. It further says that abstraction is one of the bedrock concepts of computer science, and identifies five failure modes of this abstraction error: the Framing Trap; the Portability Trap; the Formalism Trap; the Ripple Effect Trap; and the Solutionism Trap. Each of these traps arises from failing to consider how social context is interlaced with technology in different forms, and thus the remedies also require a deeper understanding of “the social” to resolve problems.
A recent essay in MIT Technology Review by Karen Hao puts it more clearly. It documents how the vast majority of AI applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. Hao says, “We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system.”
Bias, according to researchers, can creep in at any stage: during data collection for the algorithm, or even during its testing. One of the challenges is the complete lack of transparency about how these algorithms are developed. After all, they are created by individuals, and the biases of those individuals are bound to find a place in what they create. Their conscious preferences may not show, but their unconscious preferences can creep in.
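The mechanism is easy to illustrate with a toy sketch (the data, groups, and flag rates below are entirely hypothetical, not drawn from any real platform): if human labellers flag one group’s posts more often in the training data, any model that simply learns the patterns in those labels will reproduce the skew, with no manual intervention at all.

```python
from collections import defaultdict

# Hypothetical moderation labels: (author_group, was_flagged).
# The labels themselves are skewed -- annotators flagged group B's
# posts four times as often as group A's.
training_data = (
    [("A", False)] * 90 + [("A", True)] * 10 +   # 10% of group A flagged
    [("B", False)] * 60 + [("B", True)] * 40     # 40% of group B flagged
)

def train(data):
    """Learn per-group flag rates -- a stand-in for any model that
    absorbs whatever correlations exist in its training labels."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in data:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

model = train(training_data)
print(model)  # the label skew becomes the model's "policy": A ~0.1, B ~0.4
```

The point of the sketch is that nothing in the training code mentions either group; the bias lives entirely in the labels, which is why “no manual intervention” is no defence.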
The absence of human intervention in running a platform does not guarantee the absence of bias or the presence of fairness.
It is also time for the citizens of this country to raise the issue of biases creeping into AI systems across the internet. If platforms like Twitter continue to ignore the in-built biases in their systems, they should remember what IBM Research has said: “AI bias will explode. But only the unbiased AI will survive.”