Hindustan Times ST (Mumbai)

Biases are creeping into the Internet’s AI systems

Social media platforms such as Twitter should realise this problem and rectify it. Otherwise, they’ll pay a heavy price

- ARUN ANAND

Twitter is under the cosh these days from a section of citizens who feel that it is biased towards communist ideology. Concerns have also been raised about bias on other social media platforms over the last few years.

The standard reply of social media platform administrators, when such accusations of bias are made, is that there is no manual intervention and that algorithm-based Artificial Intelligence (AI) runs these platforms; hence, there is no question of bias.

The fact is that there is evidence of various kinds of bias in AI, and the algorithms are not as neutral as they are projected to be. A major challenge for all players in the field of AI is how to make it bias-free.

According to a recent research paper, Fairness and Abstraction in Sociotechnical Systems (Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, January 29-31, 2019, pages 59-68), there are many ways in which the absence of social context can lead to severe bias in the way AI operates. It further says that abstraction is one of the bedrock concepts of computer science and that there are five failure modes of this abstraction error: the Framing Trap, the Portability Trap, the Formalism Trap, the Ripple Effect Trap, and the Solutionism Trap. Each of these traps arises from failing to consider how social context is interlaced with technology in different forms, and thus the remedies also require a deeper understanding of “the social to resolve problems”.

A recent essay in MIT Technology Review by Karen Hao puts it more clearly. It documents how the vast majority of AI applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. Hao says, “We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system.”

The bias, according to researchers, can creep in at any time. It can enter during data collection for the algorithm or even during its testing. One of the challenges is that there has been a complete lack of transparency about how the algorithms were developed. After all, they are created by individuals, and their biases are bound to find a place in what they have created. Their conscious preferences may not be reflected, but their unconscious preferences can creep in.

Having no human intervention in running a platform does not guarantee the absence of bias or the presence of fairness.

It is also time for the citizens of this country to raise the issue of biases creeping into AI systems all over the internet. If platforms like Twitter continue to ignore these in-built biases in their systems, they should remember what IBM Research has said: “AI bias will explode. But only the unbiased AI will survive.”

Arun Anand is CEO of Indraprastha Vishwa Samvad Kendra. The views expressed are personal

