Toronto Star

Can an algorithm be racist? Yes — if it’s learning from us

Online tools and ads that respond to user input wind up reflecting human biases, too

CAITLIN DEWEY, THE WASHINGTON POST

When Flickr rolled out image recognition two weeks ago, it touted the tool as a major breakthrough in the world of online photos. There was just one, itty-bitty problem: It sometimes tagged black people as “apes” or “animals.” And it slapped the label “jungle gym” on a picture of the concentration camp Dachau.

These aren’t human errors: They are, in essence, made by a machine. And if you look around the Internet, you’ll notice these algorithmic offences happen pretty frequently.

In 2013, research from Harvard found that Google ads for arrest records appear more frequently when you search more ethnic-sounding names.

Last year, the think-tank Robinson + Yu warned that financial algorithms used in the mortgage industry frequently treated white and minority homebuyers differently (a criticism also made of Chicago’s predictive crime technology).

In Britain, a female pediatrician made international news when she was barred from entering a women’s locker room because her gym’s security system automatically coded all “doctors” as male.

And just Tuesday, my colleague Brian Fung uncovered a pretty appalling error on Google Maps: Someone vandalized the location listing for the White House, adding a racial slur. No one — human or machine — flagged it during Google’s review process.

What’s going on here, exactly? How does a system of equations — unfeeling, inert math — adopt such human biases? After all, no one at Google or Flickr intentionally programs their algorithms to be racist.

They do, however, program these systems to learn from human behaviour and to adapt to it. And on the whole, people are racist.

Take the case of eEffective, a digital ad firm. Last year, the company’s managing director, Nate Carter, was disturbed to see that his algorithm, given the choice between an ad featuring a white kid and one featuring a black kid, kept surfacing the white kid.

He hadn’t planned it that way: all the algorithm was supposed to do was track which ad people clicked and serve it up more.

But given the choice, people clicked on the white kid. So the algorithm, which is essentially colour-blind, kept displaying it.

“(It) made me wonder, are we racist?” Carter wrote, in a later essay. “Had our racism poisoned my algorithm and turned it into a monster?”
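The feedback loop Carter describes is simple enough to sketch. Below is a minimal, hypothetical illustration of a click-through-optimizing ad selector of the kind the article describes; the ad names, counts and the greedy selection rule are assumptions for illustration, not eEffective’s actual system:

```python
import random

# Hypothetical click and impression counts for two otherwise identical ads;
# the only difference is the child pictured. Names are illustrative.
stats = {
    "ad_white_kid": {"clicks": 0, "impressions": 0},
    "ad_black_kid": {"clicks": 0, "impressions": 0},
}

def click_rate(name):
    s = stats[name]
    return s["clicks"] / s["impressions"] if s["impressions"] else 0.0

def pick_ad(epsilon=0.1):
    """Greedy click-through optimizer: usually show whichever ad has the
    higher observed click rate, occasionally try the other one."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # Break ties randomly; otherwise favour the ad people click on most.
    return max(stats, key=lambda name: (click_rate(name), random.random()))

def record(name, clicked):
    """Update the counts after each impression."""
    stats[name]["impressions"] += 1
    if clicked:
        stats[name]["clicks"] += 1
```

Nothing in the sketch mentions race. But if users click the ad with the white kid even slightly more often, the greedy rule feeds that preference back: the favoured ad gets shown more, keeps its edge and crowds the other out. The bias lives in the click data the selector learns from, not in the selection rule itself.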

It’s a valid question and one that both technologists and sociologists are still working out — particularly when it comes to larger, more complex algorithmic systems, whose biases and consequences are harder to suss out.

A new field of study, called algorithmic auditing, attempts to probe these systems and determine where bias is introduced.

Meanwhile, Flickr has promised that it’s “working on a fix,” and Google has suspended map editing until the company can better moderate it.

Whether they succeed or fail, these episodes raise some fascinating existential questions about technology. Like: Do we really want machines to “learn” from us? Maybe not, honestly.
