Santa Fe New Mexican

Researchers combat bias in artificial intelligence

By Dina Bass and Ellen Huet

When Timnit Gebru was a student at Stanford University’s prestigious Artificial Intelligence Lab, she ran a project that used Google Street View images of cars to determine the demographic makeup of towns and cities across the U.S.

While the AI algorithms did a credible job of predicting income levels and political leanings in a given area, Gebru says her work was susceptible to bias — racial, gender, socio-economic. She was also horrified by a ProPublica report that found a computer program widely used to predict whether a criminal will re-offend discriminated against people of color.

So earlier this year, Gebru, 34, joined a Microsoft team called FATE — for Fairness, Accountability, Transparency and Ethics in AI. The program was set up three years ago to ferret out biases that creep into AI data and can skew results.

“I started to realize that I have to start thinking about things like bias,” says Gebru, who co-founded Black in AI, a group set up to encourage people of color to join the artificial intelligence field. “Even my own Ph.D. work suffers from whatever issues you’d have with dataset bias.”

In the popular imagination, the threat from AI tends toward the alarmist: self-aware computers turning on their creators and taking over the planet. The reality turns out to be a lot more insidious but no less concerning to the people working in AI labs around the world.

Companies, government agencies and hospitals are increasingly turning to machine learning, image recognition and other AI tools to help predict everything from the creditworthiness of a loan applicant to the preferred treatment for a person suffering from cancer. The tools have big blind spots that particularly affect women and minorities.

“The worry is if we don’t get this right, we could be making wrong decisions that have critical consequences to someone’s life, health or financial stability,” says Jeannette Wing, director of Columbia University’s Data Sciences Institute.

Researchers at Microsoft, IBM and the University of Toronto identified the need for fairness in AI systems back in 2011. Now in the wake of several high-profile incidents — including an AI beauty contest that chose predominantly white faces as winners — some of the best minds in the business are working on the bias problem. The issue was a key topic at the Conference on Neural Information Processing Systems, an annual confab last week in Long Beach, Calif.

Bias can surface in various ways. Sometimes the training data is insufficiently diverse, prompting the software to guess based on what it “knows.” In 2015, Google’s photo software infamously tagged two black users as “gorillas” because the data lacked enough examples of people of color. Even when the data accurately mirrors reality, the algorithms can still get the answer wrong, incorrectly guessing that a particular nurse in a photo or text is female, say, because the data shows fewer men are nurses.
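As a rough illustration of that failure mode (a toy sketch with made-up numbers, not drawn from the article or from Google’s system), a model that simply learns the most common gender seen for each occupation will label every nurse female, including the male nurses it was actually trained on.

```python
# Hypothetical sketch: skewed training data leads a model to "guess based on
# what it knows." The occupations, genders and counts below are invented.
from collections import Counter, defaultdict

# Toy training data: (occupation, gender) pairs with a 9-to-1 skew for nurses.
training_data = (
    [("nurse", "female")] * 9 + [("nurse", "male")] * 1
    + [("engineer", "male")] * 8 + [("engineer", "female")] * 2
)

# "Train": record how often each gender appears with each occupation.
counts = defaultdict(Counter)
for occupation, gender in training_data:
    counts[occupation][gender] += 1

# "Predict": always pick the majority gender for the occupation, so the 10
# percent of male nurses in the data are misclassified every time.
majority_label = {occ: c.most_common(1)[0][0] for occ, c in counts.items()}
print(majority_label["nurse"])     # female
print(majority_label["engineer"])  # male
```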

AI also has a disconcertingly human habit of amplifying stereotypes. Ph.D. students at the University of Virginia and University of Washington examined a public dataset of photos and found that the images of people cooking were 33 percent more likely to picture women than men. When they ran the images through an AI model, the algorithms said women were 68 percent more likely to appear in the cooking photos.
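For readers who want to see what those percentages mean, here is a small worked example with hypothetical counts (the researchers’ actual tallies are not given in the article): “33 percent more likely” corresponds to roughly 133 cooking photos of women for every 100 of men, and the model’s predictions widen that gap to about 168 per 100.

```python
# Hedged illustration of the amplification figures, using invented counts.
def pct_more_likely(women: int, men: int) -> float:
    """How much more likely women are than men to appear, as a percentage."""
    return 100 * (women - men) / men

# Hypothetical dataset counts: 133 cooking photos of women per 100 of men.
print(pct_more_likely(133, 100))  # 33.0 -> "33 percent more likely"

# Hypothetical model output on the same photos: the gap grows to 168 per 100.
print(pct_more_likely(168, 100))  # 68.0 -> "68 percent more likely"
```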
