The Star Malaysia - Star2

When algorithms have bigoted world views

- By ANNE POLLMANN

TECHNOLOGY is supposed to help humans be more productive, and algorithms are taking all kinds of tasks out of our hands. But when algorithms go wrong, it can be a real horror story.

Like when an algorithm meant to help with Amazon’s hiring process recommended only male applicants. Or the times when Google’s image recognition software kept mixing up black people with gorillas and telling Asian people to open their eyes.

So what’s up with that? Can algorithms be prejudiced?

Lorena Jaume-Palasí, founder of the Ethical Tech Society in Berlin, says it’s more complicated than that. “People are always the reason for discrimination,” she tells dpa.

“Instead of trying to regulate the reasons discrimination exists, we are focusing on the technology, which just mirrors discriminatory practices,” she says.

Algorithms are instructions on how to solve a particular problem. They tell the machine: this is how to do this thing. Artificial intelligence (AI) is based on algorithms.

AI mimics intelligent behaviour: the machine is instructed to make informed decisions. To do that successfully, it needs large amounts of data, from which it can recognise patterns and make decisions based on those patterns.

This is one explanation for why algorithms can turn out so nasty: often, they are making decisions based on old data.

“In the past, companies did have employment practices that favoured white men,” says Susanne Dehmel from the German digital industry association Bitkom. If you train an algorithm on this historical data, it will choose candidates that fit that bill.
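To make that concrete, here is a minimal sketch in Python of how the effect arises. The data, numbers and scoring function are entirely hypothetical (this is not Amazon’s actual system); the sketch only shows that a pattern-matching learner trained on skewed hiring decisions reproduces the skew.

```python
# Hypothetical sketch: a "model" trained on biased historical hiring data.
# None of this reflects any real company's system or data.
import random
from collections import defaultdict

random.seed(0)

# Historical records: (years_experience, gender, hired). The outcomes are
# skewed by assumption: equally qualified women were hired far less often.
history = []
for _ in range(1000):
    gender = random.choice(["m", "f"])
    experience = random.randint(0, 10)
    qualification = experience / 10            # merit signal
    prejudice = 0.6 if gender == "m" else 0.2  # historical bias, not merit
    hired = random.random() < qualification * prejudice
    history.append((experience, gender, hired))

# "Training": estimate hire rates per (experience, gender) bucket, which is
# effectively the pattern a learner extracts from such data.
counts = defaultdict(lambda: [0, 0])  # bucket -> [hires, total]
for exp, g, hired in history:
    counts[(exp, g)][0] += hired
    counts[(exp, g)][1] += 1

def score(exp, g):
    hires, total = counts[(exp, g)]
    return hires / total if total else 0.0

# Two identical candidates who differ only in gender get different scores:
print("8 years' experience, male:  ", round(score(8, "m"), 2))
print("8 years' experience, female:", round(score(8, "f"), 2))
# The "neutral" model simply mirrors the prejudice baked into its data.
```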

When it comes to racist photo recognition software, it is also very likely that it was not the algorithm’s fault – instead, the choice of images used to train the machine may have been problematic in the first place.

Now, there is a positive side to all this: The machines are holding a mirror up to human society, and showing us a pretty ugly picture. Clearly, discrimina­tion is a big problem.

One solution is for tech companies to take more of an active role in what algorithms spit out, and correct behaviours when needed.

This has already been done. For example, when US professor Safiya Umoja Noble published her book Algorithms Of Oppression, in which she criticised the fact that Google’s search results for the term “black girls” were extremely racist and sexist, the tech giant decided to make some changes.

We need to ask how we can ensure that AI technologies make better and fairer decisions in the future. Dehmel says there needn’t be any government regulation.

“It is a competency problem. When you understand how the technology works, you can counter discrimination carefully,” she says.

Past examples have already shown that it isn’t enough to just take out information about gender and race – the algorithms were still able to make discriminatory connections and produced the same results. Instead, Dehmel suggests developers create diverse data sets, and conduct careful trials before training the machines.
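Here is a minimal sketch of that effect, again with entirely invented data: even after the gender column is deleted, a feature that happens to correlate with gender carries the same signal, and a learner will latch onto it.

```python
# Hypothetical sketch of proxy discrimination: gender is removed from the
# data, but a correlated feature still encodes it. All numbers are invented.
import random

random.seed(1)

rows = []
for _ in range(1000):
    gender = random.choice(["m", "f"])
    # A proxy that is 80% correlated with gender, e.g. a gender-skewed
    # hobby or club membership listed on the CV.
    proxy = (gender == "f") == (random.random() < 0.8)
    hired = random.random() < (0.6 if gender == "m" else 0.2)
    rows.append({"proxy": proxy, "hired": hired})  # gender deliberately dropped

# Hire rates still split cleanly along the proxy, with no gender column:
for value in (True, False):
    group = [r for r in rows if r["proxy"] is value]
    rate = sum(r["hired"] for r in group) / len(group)
    print(f"proxy={value}: hire rate {rate:.2f}")
# Any learner trained on this data will use the proxy exactly as it would
# have used gender, reproducing the original discrimination.
```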

Jaume-Palasí believes continuous checks on algorithmically based systems are necessary, and that AI should be created by more than just a developer and a data scientist.

“You need sociologists, anthropologists, ethnologists, political scientists. People who are better at contextualising the results that are being used across various sectors,” she says.

“We need to move away from the notion that AI is a mathematical or technological issue. These are socio-technological systems, and the job profiles in this field need to be more diverse.” – dpa

The machines are holding a mirror up to human society, and showing us a pretty ugly picture. — dpa
