Regina Leader-Post

Will machines stop racism?

Remember, they are programmed by humans, write Marc and Craig Kielburger.

- Craig and Marc Kielburger are the co-founders of the WE movement, which includes WE Charity, ME to WE Social Enterprise and WE Day. For more dispatches from WE, check out WE Stories at we.org.

Brisha Borden was walking through her suburban Florida neighbourhood when she spotted an unlocked bike. She took it for a joyride before dropping it.

It was too late. The cops were already on their way. Charged with petty theft, the 18-year-old might have been let off with a warning. Instead, when her file was run through state software designed to predict recidivism rates, Borden was rated high-risk and her bond was set at $1,000.

She didn’t have an adult criminal record. Algorithms predicted her likelihood to reoffend based on her race — Borden is black.

Will a machine dispense blind justice, or can robots be racist?

Since the early 2000s, various state courts have used computer programs to inform decisions on bail and sentencing. On paper, this makes sense. With prison populations ballooning across the U.S., artificial intelligence promises to take the human bias out of judgments, creating a fairer legal system — in theory.

Examining 7,000 risk assessments, ProPublica concluded the programs mistakenly targeted black defendants. Even after controlling for other factors, like criminal history, age and gender, black defendants were still 77 per cent more likely than white defendants to be labelled at high risk of violent crime.

“We like to think that computers will save us,” says software producer and diversity advocate Shana Bryant. “But we seem to forget that algorithms are written by humans.”

Even code is embedded with social bias.

“The main ingredient (in artificial intelligence) is data,” explains Parinaz Sobhani, director of machine learning for Georgian Partners. The more information is fed through algorithms, the more precise the patterns and predictions become.

“The question is, where is the data coming from?”

We are at the dawn of the age of artificial intelligence. And to make sure machines don’t mimic society’s implicit prejudices, we need people from all backgrounds coding them.

Borden’s case of algorithmic injustice is just one example. Machine learning is heralded as the future of everything from policing to health care.

But making fair machines depends on our ability to supply fair data. In Canada, for instance, Indigenous people are vastly overrepresented in prison populations. Meanwhile, a wage gap persists between women and men. If we don’t address the systemic failings behind these problems, we can’t expect machines trained on the same data to fix them.
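The problem is easy to see in a toy model. Below is a minimal sketch in Python, using fabricated data and hypothetical group labels (nothing here reflects the actual court software), of how a naive risk model trained on skewed historical records simply reproduces the skew:

```python
# Minimal sketch with synthetic, deliberately skewed "historical" data.
# It only illustrates how a model trained on biased records echoes that
# bias in its predictions; it is not the real court software.
import random

random.seed(0)

# Each record is (group, recorded_reoffence). The records themselves are
# skewed: group "A" was charged again more often for the same behaviour,
# so its recorded reoffence rate looks higher than group "B"'s.
history = [("A", random.random() < 0.6) for _ in range(1000)] + \
          [("B", random.random() < 0.3) for _ in range(1000)]

def train(records):
    """Score each group by its observed reoffence rate in the records."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [reoffended for g, reoffended in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def is_high_risk(rates, group, threshold=0.5):
    """Flag a defendant as high-risk if their group's rate crosses the threshold."""
    return rates[group] >= threshold

model = train(history)
for group in sorted(model):
    print(group, "labelled high-risk:", is_high_risk(model, group))

# Result: group "A" is flagged high-risk and group "B" is not, regardless of
# any individual's own record -- the model has learned the bias in its data.
```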

Socially corrupted data caused the spectacular failure of early image-recognition software designed by Google, which categorized black people as gorillas. The program, meant to sort photos by their subjects, was tested exclusively on white people. The tech sector, despite efforts to the contrary, remains overwhelmingly white and male.

“If we don’t have a diverse group of people building technology, it will only serve a very small percentage of people: those who built it,” explains Melissa Sariffodeen, co-founder and CEO of Ladies Learning Code.

That’s why questions about who gets hired in the tech sector are about more than equality in the workforce.

“We are at a nexus point,” explains Bryant. If we don’t prioritize diverse voices in these emerging technologies, the future will have robots — but no less prejudice.
