The Hamilton Spectator

Racism is a human construct, but it’s in our technology, too

SIMON WOODSIDE

Simon Woodside is a software developer and startup co-founder living in Hamilton.

Racism is a human trait, but cameras are presumably impartial. They capture reality as it is, not as we believe it to be. But what if our cameras had been unintentionally programmed with algorithms that treat white people differently than Black people?

Can a machine be racist? Anyone reading this can run a simple experiment to test the question. Take your camera or camera phone and shoot two pictures: one of a white person, and one of a person with dark skin. You will likely find that the second person is poorly rendered in the photo compared to the first.

I opened the print edition of my local newspaper recently and looked at the striking difference between two photographs: one of a white man, one of a dark-skinned man.

The poor quality of the second image (of Jagmeet Singh, leader of the federal NDP) is not an accident; it is a result of the way cameras and photo display systems work.

A modern camera has been imbued with a certain amount of artificial intelligence that enables it to locate faces as you take a picture, and it then adjusts the brightness, contrast and colour to make the face clearly visible. The software algorithm that performs these measurements is highly optimized technology that operates in the blink of an eye. It then sacrifices other, less important parts of the picture in order to make sure that those all-important faces are clearly rendered.
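For the technically curious, the logic looks roughly like the sketch below (in Python, using OpenCV’s bundled Haar-cascade face detector as a stand-in; real camera firmware is proprietary and far more sophisticated, and the function name and target brightness here are illustrative assumptions, not any vendor’s actual pipeline):

```python
import cv2

# A minimal sketch of face-priority exposure: find a face, then scale the
# whole image so the face region lands near a target brightness.
def expose_for_face(image_bgr, target_brightness=130):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return image_bgr                   # no face found: leave exposure alone
    x, y, w, h = faces[0]                  # meter on the first detected face
    face_mean = gray[y:y+h, x:x+w].mean()  # current face brightness (0 to 255)
    gain = target_brightness / max(face_mean, 1.0)
    # Brighten or darken the entire frame to serve the face; other regions
    # may blow out or sink into shadow -- they are the parts "sacrificed."
    return cv2.convertScaleAbs(image_bgr, alpha=gain, beta=0)
```

Note the failure mode hidden in that sketch: if the detector itself was trained mostly on white faces, it may simply find no face at all in a picture of a darker-skinned subject, and the adjustment never runs.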

If we find that a white person’s face is well rendered and a dark-skinned person’s face is indistinct, we would do well to wonder if there is a problem with the software. And indeed, there is a long history, going back to film, of testing photo rendering on white people and mostly ignoring people with other skin colours. For example, Kodak for years used the “Shirley Card,” a photo of a white woman with brown hair, to calibrate all of its photo processing. The company adjusted it only after many complaints.

In today’s digital cameras, face recognition uses artificial intelligence algorithms trained on large numbers of images, a set of photos that are meant to be representative of the people we will all be taking pictures of. There’s a whole process of collecting, analyzing and preparing these “training sets,” and if racial bias creeps into the process, you could wind up with a lot of well-prepared photos of white people and very few of darker-skinned people. When the AI is trained, it inherits this bias, and the software, without any intention, itself becomes biased.
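The mechanism is easy to demonstrate in miniature. The toy simulation below (Python with scikit-learn, on synthetic numbers that have nothing to do with real faces) trains one model on a set that is 95 per cent “group A” and five per cent “group B,” where the two groups follow different patterns; the group names and the 95/5 split are assumptions chosen purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, rule):
    # Toy data: two features; the true label follows a group-specific rule
    # (group A's label depends on feature 0, group B's on feature 1).
    X = rng.normal(size=(n, 2))
    y = (X[:, rule] > 0).astype(int)
    return X, y

# An unbalanced "training set": 950 examples of group A, only 50 of group B.
Xa, ya = sample(950, rule=0)
Xb, yb = sample(50, rule=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Test on fresh, equal-sized samples from each group.
Xa_test, ya_test = sample(1000, rule=0)
Xb_test, yb_test = sample(1000, rule=1)
print("accuracy on group A:", model.score(Xa_test, ya_test))  # high
print("accuracy on group B:", model.score(Xb_test, yb_test))  # near a coin flip
```

Nobody wrote a biased rule here; the model simply saw too few examples of group B to learn its pattern. That is the same quiet way an unrepresentative training set degrades a camera’s performance for some of its users.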

This problem gets worse, though, as AI and face recognition go beyond correctly exposing photos and move into identifying individuals. In various real-world cases, AI algorithms have been found to be more likely to misidentify Black people, in some cases sending the wrong person to jail. As we move into a more machine-driven society, the stakes of systemic racism in our software rise higher and higher. What if, in the future, a self-driving car sees a white person and doesn’t see a Black person? Lives are, in a very real way, at stake.

If there’s a lesson here, it’s that racism is all-pervasive. It’s not just about police violence. It goes deep, so deep that it’s contained in a device that almost everyone reading this article has in their pocket. The next time you take a picture, perhaps take a moment to reflect on how small a thing a camera is, and how much systemic racism it can hide.
