The Guardian (USA)

'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley

- Sam Levin in San Francisco

When Stanford announced a new artificial intelligence institute, the university said the “designers of AI must be broadly representative of humanity” and unveiled 120 faculty and tech leaders partnering on the initiative.

Some were quick to notice that not a single member of this “representative” group appeared to be black. The backlash was swift, sparking discussion on the severe lack of diversity across the AI field. But the problems surrounding representation extend far beyond exclusion and prejudice in academia.

Major tech corporations have launched AI “ethics” boards that not only lack diversity, but sometimes include powerful people with interests that don’t align with the ethics mission. The result is what some see as a systemic failure to take AI ethics concerns seriously, despite widespread evidence that algorithms, facial recognition, machine learning and other automated systems replicate and amplify biases and discriminatory practices.

This week, Google also announced an “external advisory council” for AI ethics, including Dyan Gibbens, the CEO of a drone company, and Kay Coles James, the president of a rightwing thinktank who has a history of anti-immigrant and transphobic advocacy.

For people directly harmed by the fast-moving and largely unregulated deployment of AI in the criminal justice system, education, the financial sector, government surveillance, transportation and other realms of society, the consequences can be dire.

“Algorithms determine who gets housing loans and who doesn’t, who goes to jail and who doesn’t, who gets to go to what school,” said Malkia Devich Cyril, the executive director of the Center for Media Justice. “There is a real risk and real danger to people’s lives and people’s freedom.”

Universities and ethics boards could play a vital role in counteracting these trends. But they rarely work with people who are affected by the tech, said Laura Montoya, the cofounder and president of the Latinx in AI Coalition: “It’s one thing to really observe bias and recognize it, but it’s a completely different thing to really understand it from a personal perspective and to have experienced it yourself throughout your life.”

It’s not hard to find AI ethics groups that replicate power structures and inequality in society – and altogether exclude marginalized groups.

The Partnership on AI, an ethics-focused industry group launched by Google, Facebook, Amazon, IBM and Microsoft, does not appear to have black board members or staff listed on its site, and has a board dominated by men. A separate Microsoft research group dedicated to “fairness, accountability, transparency and ethics in AI” also excludes black voices.

Axon, the corporation that manufactures Tasers, launched an AI ethics board last year. While its makeup is racially diverse, it includes a number of leaders from law enforcement, a sector facing growing scrutiny over its discriminatory and fatal uses of Axon products.

A major joint AI ethics research initiative of Harvard and the Massachusetts Institute of Technology (MIT) has one woman on its board, and the five directors from the Harvard Berkman Klein Center whose research is tied to the initiative are all white men. (Tim Hwang, an MIT director for the initiative, said inclusion was “one of the primary objectives” of the program and was integral to its grant process and research.)

After facing an uproar, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) added several black members to its webpage. A spokesperson told the Guardian the initial site was an incomplete list and that the additional names were not new partners.

Still, out of 20 people on the leadership team, only six are women.

Kristian Lum, the lead statistician at the Human Rights Data Analysis Group and an expert on algorithmic bias, said she hoped Stanford’s stumble made the institution think more deeply about representation.

“This type of oversight makes me worried that their stated commitment to the other important values and goals – like taking seriously creating AI to serve the ‘collective needs of humanity’ – is also empty PR spin and this will be nothing more than a vanity project for those attached to it,” she wrote in an email.

When new AI ethics projects fail at diversity from the start, it makes it challenging to recruit different voices without tokenizing people, said Nicole Sanchez, a tech diversity advocate and the founder of Vaya Consulting.

“They just lost credibility,” Sanchez said of Stanford, which she attended as a student. “How would you feel if you’re one of the handful of black folks who are called now?”

Rediet Abebe, a computer science researcher and the cofounder of Black in AI, said it was encouraging that many in the field spoke out about Stanford: “It has been gratifying to see how quickly many caught this, called it out and are looking to work with folks at Stanford to fix it. I don’t know that the discourse would have been the same 10 years ago, or even two years ago,” she said.

An HAI spokesperson told the Guardian in an email that “we acknowledge that there’s progress to be made” and that Stanford was “committed to bringing in new voices and perspectives to this conversation”. The institute would be hiring 20 additional faculty members and recruiting fellows.

Google’s tactic, however, seems to be to ignore the backlash over the makeup of its AI group. The company has not responded to the Guardian’s repeated requests for comment.

Google’s AI partnership with James, the president of the rightwing Heritage Foundation, was particularly disturbing to some critics, given that she is anti-abortion, has fought LGBT protections and has promoted Trump’s proposed border wall.

Os Keyes, a PhD student at the University of Washington’s data ecologies laboratory, said the appointment of James was a “transparent calculation” that had nothing to do with ethics and was meant to appease conservatives in Washington DC in an effort to avoid regulations.

“This is a person who hates me and hates my community and is trying to cause us harm,” said Keyes, who is trans, adding that they felt “visceral horror” when they saw the announcement.

It was further evidence that corporate ethics initiatives like this are futile, Keyes added. “They can’t be trusted with self-policing. They shouldn’t be allowed to self-regulate.”

Sanchez said the decision to partner with James was “not even a dog whistle – that’s a bullhorn”, adding that there was no such thing as “neutral” AI: “The idea that you can do AI or technical ethics without a point of view is silly … The bias is deep inside the code. Whose values are embedded in the bias?”

The Heritage Foundation did not respond to requests for comment.

Mecole Jordan, a Chicago-based community organizer who is part of Axon’s ethics group, said she appreciated the opportunity to be involved, given that black communities are so often forced to fight damaging technology after it’s already been adopted.

“These things are done in a vacuum and rolled out, and we have to just live with it and respond to it, as opposed to being a part of the conversati­on,” she said.

A protest at the Google headquarters on 1 November 2018 over the company’s handling of a large payout to Android chief Andy Rubin and concerns over other managers who had allegedly engaged in sexual misconduct. Photograph: Stephen Lam/Reuters
