The Mercury News

Researchers say bias in AI can be linked to lack of tech diversity

Report urges retooling of systems that classify, detect, predict race and gender

- By Levi Sumagaysay lsumagaysay@bayareanewsgroup.com

The questions surrounding bias in artificial intelligence are urgent, and the answers lie in diversifying tech workforces, researchers say.

A yearlong look at the issue, which included poring through 150 previous studies, found that “bias in AI systems reflects historical patterns of discrimination,” a new report being released today says. The report finds that such technology is being created by large tech companies and a few universities, mostly by wealthy white men who benefit from such systems, which can harm people of color, gender minorities and other underrepresented groups.

Only 15 percent of AI research employees at Facebook and 10 percent at Google are women, according to researchers at the AI Now Institute at New York University, which published the report. The overall numbers of black workers at tech companies such as Google, Facebook and Microsoft range from 2.5 percent to 4 percent. Taken together, that constitutes what the researchers call a crisis, especially as AI is being used to determine loan or insurance approvals, who gets interviewed for a job and who gets bail, and in predictive policing and more.

The report also expresses “deep concern” about, and urges a rethinking of, AI systems that classify, detect and predict race and gender. Among the examples it cites: Uber’s facial recognition system, which is made by Microsoft, did not recognize a transgender driver’s face last year, causing her to be locked out of the ride-hailing app and miss three days of work. A few years ago, Google Photos identified black people as gorillas, the report noted.

“There is an intersection between discriminatory workforces and discriminatory technology,” Sarah Myers West, a postdoctoral researcher at AI Now and lead author of the study, said on a Tuesday call with reporters. She also noted that there needs to be “a greater level of transparency” around AI, which she said is “largely obscured by trade secrets.”

It’s important to “look beyond technical fixes for social problems,” said Meredith Whittaker, co-founder and co-director of AI Now. She is also founder and lead of Google’s Open Research group.

The researchers recommend changes that tech companies have long been pushed to make: fix wage and opportunity inequality by race and gender, provide more transparency about hiring practices and wages, and get more members of underrepresented groups into positions of leadership.

“Existing methods have failed to contend with uneven distribution of power,” said Kate Crawford, co-founder and co-director of AI Now and research professor at New York University, on Tuesday. Crawford, who also is a principal researcher at Microsoft Research, added that “fixing the so-called pipeline problem (in tech) is not going to fix AI’s diversity problem.”

Crawford said focusing on the pipeline, or the supply of available tech workers, ignores deeper issues. Those include workplace culture, harassment, exclusionary hiring practices and tokenization, which could cause employees to leave companies or avoid AI altogether, she said. The report cited problems of harassment, discrimination and the downplaying of diversity issues at big tech companies that are investing in AI, including Microsoft, Uber, Apple, Google, Facebook and Tesla.

Crawford also addressed internal backlash against the push to diversify tech workplaces. She referred to the memo by former Google engineer James Damore, who attributed the low numbers of women in tech to biological gender differences.

“It’s going to be important that people making those arguments aren’t making AI systems,” she said.

Is there incentive for tech companies to take the researchers’ recommendations to heart?

“Frankly, we’ve now reached a moment of serious reckoning,” Crawford said. “The call for accountability is coming from in the house.” She pointed to recent worker protests at Microsoft and Google over issues of harassment and discrimination, saying how the companies deal with those issues will determine how they retain and attract talent.

Whittaker mentioned other types of pressure, including the introduction of legislation to end forced arbitration in workplaces, an effort helped along by Google employees.

The researchers extended similar calls for change to the academic world, because research about bias in AI could also use different perspectives and should keep in mind intersectionality — that is, that people can face discrimination based on more than one factor. They called for more transparency, plus rigorous testing of AI systems that includes pre-release trials, independent auditing and continued monitoring.

The AI Now Institute is a nonprofit organization that studies issues surrounding artificial intelligence. Its partners and funders include Google, Microsoft, the Ford Foundation, the MacArthur Foundation and the ACLU.

STAFF ARCHIVES: A new report from the AI Now Institute finds discrimination in facial recognition can be tied to tech workplaces.
