Business Day

Value-neutral bots inherit our desire to tick prejudiced boxes

Technology may be a blank canvas, but in the hands of humans it is prone to reflecting biases

SYLVIA McKEOWN ● McKeown is a gadget and tech trend writer.

Last Sunday the New York Times reported on a US health and human services department memo indicating that Trump’s administration is moving to strip away the rights of about 1.4-million Americans, specifically those who have chosen to identify themselves — surgically or otherwise — as a gender other than the one they were born into.

The memo insists that “the sex listed on a person’s birth certificate, as originally issued, shall constitute definitive proof of a person’s sex unless rebutted by reliable genetic evidence”.

Or, as the Times put it, “the department of health and human services has privately argued that the term ‘sex’ was never meant to include gender identity or even homosexuality, and that the lack of clarity allowed the Obama administration to wrongfully extend civil rights protections to people who should not have them”.

Civil rights protections to people who should not have them? Surely everyone’s civil rights should be protected? Isn’t that the point of having them in the first place?

Humans are unfailingly geared to discriminate. Our own history weaves a colourful tapestry that proves the point acutely. As does our present.

Little over a week ago the DA made border control one of the key issues for its 2019 election campaign, all the while pushing SA’s xenophobic narrative further along. At the same time, AfriForum’s “head of community safety” was on tour in Australia to discuss the organisation’s racial slant on farm murders and land issues in a country that barely recognises the land rights of its own indigenous people.

In a world where we have the likes of Penny Sparrow calling black people monkeys, are we really surprised when Google Photos’ artificial intelligence (AI) did the same thing by tagging two black people as gorillas? Or that two years ago Microsoft’s Twitter chatbot Tay was shut down in less than a day because it was taught to be a woman-hating Nazi? Or that a year later Zo, another Microsoft chatbot, referred to the Quran as “very violent”?

This type of discrimination isn’t exclusively a Microsoft PR nightmare. Reuters recently reported, after speaking to five people who worked on Amazon’s AI recruiting software, that the program had effectively been taught to discriminate against women. Because the bulk of the CVs it learnt from came from men, the software taught itself that male candidates were preferable. It “downgraded” female candidates to the point where it discriminated against any CV that had the word “woman” in it.
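To see how such a bias can emerge without anyone programming it in, consider a toy sketch in Python — invented CVs and a bog-standard text classifier, nothing like Amazon’s actual system or data. Fitted to a historically male-skewed set of hiring decisions, the model ends up with a negative weight on the word “women”:

# A toy illustration (invented data, not Amazon's system): a classifier
# trained on skewed hiring history learns to penalise the word "women".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical CVs: the "hired" examples are overwhelmingly male,
# mirroring the historical influx of men's CVs the article describes.
cvs = [
    "captain of men's chess club, software engineer",    # hired
    "men's rugby first team, backend developer",         # hired
    "software engineer, open source contributor",        # hired
    "data engineer, hackathon winner",                    # hired
    "captain of women's chess club, software engineer",  # not hired
    "women's coding society lead, backend developer",    # not hired
]
hired = [1, 1, 1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned coefficient for "women" comes out negative: the model has
# absorbed the skew in its training data, not any signal about ability.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':", round(weights["women"], 3))

The point of the sketch is that no engineer has to type a discriminatory rule: the prejudice arrives ready-made in the training data.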

Nihar Shah, who teaches machine learning at Carnegie Mellon University, told Reuters there is still much work to be done. “How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable, that’s still quite far off,” he said.

And yet that hasn’t stopped the strong push to use facial recognition biometrics as a tool for social currency. As it stands, China has constructed a draconian social credit system that indiscriminately deducts points if you smoke or eat junk food, not to mention if your political beliefs do not entirely match those of President Xi Jinping.

And your face can give you away, according to psychologist Michal Kosinski, who claims AI can detect your sexuality and politics just by looking at you. He’s the man whose college research inspired the creation of the political consultancy called Cambridge Analytica. We all know how that worked out.

Kosinski’s new work focuses on using AI to detect psychological traits, which leads him to believe this technology could be used to detect emotions, IQ and even a predisposition to commit certain crimes. This is made all the more terrifying by the fact that he makes frequent trips to brief the notoriously homophobic and racist Russian cabinet.

The Russians aren’t the only ones looking at algorithms to determine the likelihood of criminality. In the US the Correctional Offender Management Profiling for Alternative Sanctions (Compas) system hit the headlines when Eric Loomis challenged the use of the algorithm as a violation of his due process rights to be sentenced as an individual.

The algorithm indiscriminately judged Loomis a high risk. He pleaded guilty to attempting to flee from an officer in a car that had been used in a shooting, and was still slapped with a six-year prison sentence.

In spite of an investigation in 2016 by technology reporter Julia Angwin and colleagues at ProPublica, who found that Compas had a bias against black men, Loomis lost his appeal.

“Blacks are almost twice as likely as whites to be labelled high risk but not actually reoffend,” Angwin wrote.

The report further found that Compas “makes the opposite mistake among whites: they are much more likely than blacks to be labelled lower risk but go on to commit other crimes”.
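For readers who want the arithmetic behind those two claims, here is a minimal sketch with invented numbers (not ProPublica’s data). The “twice as likely” figure is a gap in false positive rates: among people who did not reoffend, the share who were nonetheless labelled high risk.

# Hypothetical risk labels (1 = scored high risk) and two-year outcomes,
# invented purely to illustrate the false-positive-rate comparison.
def false_positive_rate(high_risk, reoffended):
    """Among people who did NOT reoffend, the share labelled high risk."""
    non_reoffenders = [hr for hr, r in zip(high_risk, reoffended) if not r]
    return sum(non_reoffenders) / len(non_reoffenders)

black_high_risk  = [1, 1, 0, 1, 0, 0]
black_reoffended = [False, True, False, False, False, True]
white_high_risk  = [1, 1, 0, 0, 0, 1]
white_reoffended = [False, True, False, True, False, True]

# A gap like 0.50 vs 0.33 is the shape of disparity the report describes;
# the mirror-image error (labelled low risk but reoffends) is the false
# negative rate, which ProPublica found ran the other way for whites.
print("black FPR:", false_positive_rate(black_high_risk, black_reoffended))
print("white FPR:", false_positive_rate(white_high_risk, white_reoffended))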

Technology is value-neutral, its purpose defined by the person wielding it. It’s a pity we can’t conversely learn the value of neutrality.

Bot boxes: In a world where discrimination is rife, it is no wonder our artificial intelligence programs reflect those biases and prejudices, despite technology being value-neutral. Picture: Reuters
