Artificial intelligence too white, too male
Technologies show discrimination, mirroring real-world inequalities
SAN FRANCISCO – Facial recognition systems frequently misidentify people of color. Lending tools charge higher interest rates to Hispanics and African Americans. Sentencing algorithms discriminate against black defendants. Job hunting tools favor men. Negative emotions are more likely to be assigned to black men’s faces than to white men’s. Computer vision systems for self-driving cars have a harder time spotting pedestrians with darker skin tones.
The use of artificial intelligence, which combs through vast amounts of our personal data in search of patterns, is rapidly expanding in critical parts of Americans’ daily lives such as education, employment, health care and policing. Increasingly, powerful artificial intelligence tools determine who gets into school, who gets a job, who pays a higher insurance premium.
Yet a growing body of research shows that these technologies are rife with bias and discrimination, mirroring and amplifying real-world inequalities. A study scheduled to be released Wednesday by New York University’s AI Now Institute identifies a key reason why: The people building these technologies are overwhelmingly white and male.
Artificial intelligence technologies are developed mostly in major tech companies such as Facebook, Google, Amazon and Microsoft, and in a small number of university labs, all of which tilt white, affluent and male and, in many cases, are only getting more so. Only by adding more women, people of color and other underrepresented groups can artificial intelligence address the bias and create more equitable systems, says Meredith Whittaker, a report author and co-founder of the AI Now Institute.
“The problem of a lack of diversity in tech is obviously not new but it’s reached a new and urgent inflection point. The number of women and people of color in the AI sector has decreased at the same time that the sector is establishing itself as a nexus of wealth and power,” Whittaker says. “In short, the problem here is that those in the room when AI is built, and those who are benefiting from the rapid proliferation of AI systems, represent an extremely narrow segment of the population. They are mainly men, they are mainly technically educated and they are mainly white. This is not the diversity of people that are being affected by these systems.”
The study, “Discriminating Systems: Gender, Race, and Power in AI,” comes as scrutiny of AI intensifies.
For years, tech companies could not deliver on the industry’s ambitious promises of what hyper-intelligent machines could do. Today, AI is no longer the stuff of science fiction. Machines can recognize objects in a photograph or translate an online post into dozens of languages. And they are getting smarter all the time, taking on more sophisticated tasks.
Tech companies, AI researchers and industry groups cast AI in a positive light, pointing to the possibility of advances in such critical areas as medical diagnosis and personalized medicine. But as these technologies proliferate so, too, do alarm bells.
People often think of computer algorithms and other automated systems as being neutral or scientific but research is increasingly uncovering how AI systems can cause harm to underrepresented groups and those with less power. Anna Lauren Hoffmann, an assistant professor with the Information School at the University of Washington, describes this as “data violence,” or data science that disproportionately affects some more than others.
The NYU researchers say machines learn from and reinforce patterns of racial and gender discrimination.
Last year, Amazon had to scrap a tool it built to review job applicants’ resumes because it discriminated against women. This month, more than two dozen AI researchers called on Amazon to stop selling its facial recognition technology to law enforcement agencies, arguing it is biased against women and people of color.
Google’s speech recognition software has been dinged for performing better for male or male-sounding voices than female ones. In 2015, Google’s image-recognition algorithm was caught auto-tagging pictures of black people as “gorillas.”
Last year, transgender drivers for Uber whose appearances had changed were temporarily or permanently suspended because of a security feature that required them to take a selfie to verify their identity.
Other companies use AI to scan employees’ social media for “toxic behavior” and alert their bosses or analyze job applicants’ facial movements, tone of voice and word choice to predict how well they would do the job.
Leading the charge in raising awareness of the dangers of bias in AI is Massachusetts Institute of Technology researcher Joy Buolamwini, who with her research and advocacy has prompted Microsoft and IBM to improve their facial recognition systems and has drawn fire from Amazon, which has attacked her research methodology. Her work also has caused some in Congress to try to rein in the largely unregulated field as pressure increases from employees at major tech companies and the public.
Last week, Democratic lawmakers introduced first-of-their-kind bills in the Senate and the House that would require big companies to test the “algorithmic accountability” of their artificial intelligence systems such as facial recognition. The bills were introduced just weeks after Facebook was sued by the Department of Housing and Urban Development, which charged that the social media giant’s ad-targeting system allowed advertisers to exclude protected groups from seeing housing ads.
San Francisco is considering banning city agencies from using facial recognition. Privacy laws in Texas and Illinois require anyone recording biometric data, including facial recognition, to give people notice and obtain their consent. The Trump administration has made developing “safe and trustworthy” algorithms one of the key objectives of the White House’s AI initiative.
The NYU researchers say it’s critical for AI to diversify the homogeneous group of engineers building these automated systems. Yet the gender gap in computer science is widening.
As of 2015, women made up 18% of computer science majors in the U.S., down from a high of 37% in 1984. Women make up less than one quarter of the computer science workforce and receive median salaries that are 66% of their male counterparts’, according to the National Academies of Sciences, Engineering, and Medicine. The number of bachelor’s degrees in engineering awarded to black women declined 11% between 2000 and 2015.
The problem is even more acute in AI. Most speakers and attendees of machine learning conferences and 80% of AI professors are men, research shows. Women account for 15% of AI research staff at Facebook and 10% at Google. While there is very little public data on racial diversity in AI, anecdotal evidence suggests that the gaps are even wider, the study says.
Last month, when Stanford University unveiled an artificial intelligence institute with 120 faculty and technology leaders to represent humanity, not a single one was black. Boards created by tech companies to examine the ethics of artificial intelligence also lack members from underrepresented groups.
Google announced an “external advisory council” on AI ethics last month. NAACP president and CEO Derrick Johnson complained the body “lacks a qualified member of the civil rights community.” “This is offensive to people of color & indicates AI tech wouldn’t have the safeguards to prevent implicit & racial biases,” he wrote on Twitter. Google later scrapped the council.