Times Colonist

Society cannot afford to rely on racist robots

MARC AND CRAIG KIELBURGER
Global Voices

Craig and Marc Kielburger are co-founders of the WE movement, which includes WE Charity, ME to WE Social Enterprise and WE Day.

Brisha Borden was walking through her suburban neighbourhood in Florida when she spotted an unlocked bike. She took it for a block-long joy ride before dropping it.

It was too late. The cops were already on their way.

Charged with petty theft, the 18-year-old might have been let off with a warning. Instead, when her file was run through state software designed to predict recidivism rates, Borden was rated high-risk and her bond was set at $1,000 US.

She didn’t have an adult criminal record. The algorithm predicted her likelihood of reoffending based on her race: Borden is black.

Will a machine dispense blind justice, or can robots be racist?

Since the early 2000s, various U.S. state courts have used computer programs and machine learning to inform decisions on bail and sentencing. On paper, this makes sense: with prison populations ballooning across the U.S., artificial intelligence promises to take the human bias out of judgments, creating, in theory, a fairer legal system.

Looking into 7,000 risk assessments, the non-profit journalism group ProPublica concluded the programs have mistakenly targeted black defendants. Even after the report controlled for other factors, such as criminal history, age and gender, black defendants were still 77 per cent more likely than white defendants to be labelled high-risk for committing violent crime.

“We like to think that computers will save us,” says software producer and diversity advocate Shana Bryant. “But we seem to forget that algorithms are written by humans.”

Even code is embedded with social bias.

“The main ingredient [in artificial intelligence] is data,” says Parinaz Sobhani, director of machine learning for tech company Georgian Partners. The more information is fed through algorithms, the more precise the patterns and predictions become.

“The question is, where is the data coming from?”

We are at the dawn of the age of artificial intelligence. And to make sure machines don’t mimic society’s implicit prejudices, we need people from all backgrounds coding them.

Borden’s case of algorithmic injustice is just one example. Machine learning is heralded as the future of everything from policing to healthcare.

But making fair machines depends on our ability to supply fair data. In Canada, for instance, Indigenous people are overwhelmingly overrepresented in prison populations. Meanwhile, a persistent wage gap remains between women and men. If we don’t address the systemic failings surrounding these problems, we can’t expect machines to fix them while working with the same data.

Socially corrupt data led to the massive failure of early image-recognition software designed by Google, which categorized black people as gorillas. The program, meant to sort photos based on their subjects, was tested exclusively on white people. The tech sector, despite many efforts to the contrary, remains overwhelmingly white and male.

“If we don’t have a diverse group of people building technology, it will only serve a very small percentage of people: those who built it,” says Melissa Sariffodeen, co-founder and CEO of digital literacy non-profit group Ladies Learning Code.

That is why questions about who gets hired in the tech sector are about more than equality in the workforce.

“We are at a nexus point,” Bryant says.

If we don’t prioritize diverse voices in these emerging technologies, the future will have robots, but no less prejudice.
