Society cannot afford to rely on racist robots
Brisha Borden was walking through her suburban neighbourhood in Florida when she spotted an unlocked bike. She took it for a block-long joy ride before dropping it.
It was too late. The cops were already on their way.
Charged with petty theft, the 18-year-old might have been let off with a warning. Instead, when her file was run through state software designed to predict recidivism rates, Borden was rated high-risk and her bond was set at $1,000 US.
She didn’t have an adult criminal record. Algorithms predicted her likelihood of reoffending based on her race — Borden is black.
Will a machine dispense blind justice, or can robots be racist?
Since the early 2000s, various U.S. state courts have used computer programs and machine learning to inform decisions on bail and sentencing. On paper, this makes sense. With prison populations ballooning across the U.S., artificial intelligence promises to take the human bias out of judgments, creating a fairer legal system — in theory.
Examining 7,000 risk assessments, the non-profit journalism organization ProPublica concluded the programs have mistakenly targeted black defendants. Even after the report controlled for other factors, such as criminal history, age and gender, black defendants were still 77 per cent more likely than white defendants to be labelled high-risk for committing violent crime.
“We like to think that computers will save us,” says software producer and diversity advocate Shana Bryant. “But we seem to forget that algorithms are written by humans.”
Even code is embedded with social bias.
“The main ingredient [in artificial intelligence] is data,” says Parinaz Sobhani, director of machine learning for tech company Georgian Partners. The more information is fed through algorithms, the more precise the patterns and predictions become.
“The question is, where is the data coming from?”
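The mechanism Sobhani describes can be sketched in a few lines of code. The example below is hypothetical and deliberately simplified — the neighbourhood labels and re-arrest records are invented, and real risk-assessment tools are far more complex — but it shows the core problem: a model that learns from skewed historical data will faithfully reproduce the skew as a “prediction.”

```python
from collections import Counter

# Hypothetical historical records: (neighbourhood, was_rearrested).
# Neighbourhood "A" is more heavily policed, so more re-arrests are
# recorded there regardless of underlying behaviour.
records = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def risk_score(neighbourhood):
    """Naive 'model': predicted risk is just the observed
    re-arrest rate for that neighbourhood in the training data."""
    rearrests = Counter(n for n, rearrested in records if rearrested)
    totals = Counter(n for n, _ in records)
    return rearrests[neighbourhood] / totals[neighbourhood]

print(risk_score("A"))  # 0.75 — the enforcement skew becomes the prediction
print(risk_score("B"))  # 0.25
```

Nothing in the code mentions race or fairness; the bias arrives entirely through the data the model is given, which is exactly why the question of where the data comes from matters.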
We are at the dawn of the age of artificial intelligence. And to make sure machines don’t mimic society’s implicit prejudices, we need people from all backgrounds coding them.
Borden’s case of algorithmic injustice is just one example. Machine learning is heralded as the future of everything from policing to healthcare.
But making fair machines depends on our ability to supply fair data. In Canada, for instance, Indigenous people are vastly overrepresented in prison populations. Meanwhile, a persistent wage gap remains between women and men. If we don’t address the systemic failings behind these problems, we can’t expect machines trained on the same data to fix them.
Socially skewed data was behind a notorious failure of early image-recognition software from Google, which categorized black people as gorillas. The program, meant to sort photos by their subjects, had been tested exclusively on white people. The tech sector, despite many efforts to the contrary, remains overwhelmingly white and male.
“If we don’t have a diverse group of people building technology, it will only serve a very small percentage of people — those who built it,” says Melissa Sariffodeen, co-founder and CEO of digital literacy non-profit group Ladies Learning Code.
That is why questions about who gets hired in the tech sector are about more than equality in the workforce.
“We are at a nexus point,” Bryant says.
If we don’t prioritize diverse voices in these emerging technologies, the future will have robots — but no less prejudice.