
Artificial intelligence too white, too male

Technologies show discrimination, mirroring real-world inequalities

By Jessica Guynn

SAN FRANCISCO – Facial recognition systems frequently misidentify people of color. Lending tools charge higher interest rates to Hispanics and African Americans. Sentencing algorithms discriminate against black defendants. Job-hunting tools favor men. Negative emotions are more likely to be assigned to black men’s faces than to white men’s. Computer vision systems for self-driving cars have a harder time spotting pedestrians with darker skin tones.

The use of artificial intelligence, which combs through vast amounts of our personal data in search of patterns, is rapidly expanding in critical parts of Americans’ daily lives such as education, employment, health care and policing. Increasingly, powerful artificial intelligence tools determine who gets into school, who gets a job, who pays a higher insurance premium.

Yet a growing body of research shows that these technologies are rife with bias and discrimination, mirroring and amplifying real-world inequalities. A study scheduled to be released Wednesday by New York University’s AI Now Institute identifies a key reason why: The people building these technologies are overwhelmingly white and male.

Artificial intelligence technologies are developed mostly in major tech companies such as Facebook, Google, Amazon and Microsoft, and in a small number of university labs, all of which tilt white, affluent and male and, in many cases, are only getting more so. Only by adding more women, people of color and other underrepresented groups can artificial intelligence address the bias and create more equitable systems, says Meredith Whittaker, a report author and co-founder of the AI Now Institute.

“The problem of a lack of diversity in tech is obviously not new, but it’s reached a new and urgent inflection point. The number of women and people of color in the AI sector has decreased at the same time that the sector is establishing itself as a nexus of wealth and power,” Whittaker says. “In short, the problem here is that those in the room when AI is built, and those who are benefiting from the rapid proliferation of AI systems, represent an extremely narrow segment of the population. They are mainly men, they are mainly technically educated and they are mainly white. This is not the diversity of people that are being affected by these systems.”

The study, “Discriminating Systems: Gender, Race, and Power in AI,” comes as scrutiny of AI intensifies.

For years, tech companies could not deliver on the industry’s ambitious promises of what hyper-intelligent machines could do. Today, AI is no longer the stuff of science fiction. Machines can recognize objects in a photograph or translate an online post into dozens of languages. And they are getting smarter all the time, taking on more sophisticated tasks.

Tech companies, AI researchers and industry groups cast AI in a positive light, pointing to the possibility of advances in such critical areas as medical diagnosis and personalized medicine. But as these technologies proliferate, so, too, do alarm bells.

People often think of computer algorithms and other automated systems as being neutral or scientific, but research is increasingly uncovering how AI systems can cause harm to underrepresented groups and those with less power. Anna Lauren Hoffmann, an assistant professor with The Information School at the University of Washington, describes this as “data violence,” or data science that disproportionately affects some more than others.

The NYU researchers say machines learn from and reinforce patterns of racial and gender discrimination.

Last year, Amazon had to scrap a tool it built to review job applicants’ resumes because it discriminated against women. This month, more than two dozen AI researchers called on Amazon to stop selling its facial recognition technology to law enforcement agencies, arguing it is biased against women and people of color.

Google’s speech recognition software has been dinged for performing better for male or male-sounding voices than female ones. In 2015, Google’s image-recognition algorithm was caught auto-tagging pictures of black people as “gorillas.”

Last year, transgender drivers for Uber whose appearances had changed were temporarily or permanently suspended because of a security feature that required them to take a selfie to verify their identity.

Other companies use AI to scan employees’ social media for “toxic behavior” and alert their bosses, or to analyze job applicants’ facial movements, tone of voice and word choice to predict how well they would do the job.

Leading the charge in raising awareness of the dangers of bias in AI is Massachusetts Institute of Technology researcher Joy Buolamwini, whose research and advocacy have prompted Microsoft and IBM to improve their facial recognition systems and drawn fire from Amazon, which has attacked her research methodology. Her work also has spurred some in Congress to try to rein in the largely unregulated field as pressure increases from employees at major tech companies and the public.

Last week, Democratic lawmakers introduced first-of-their-kind bills in the Senate and the House that would require big companies to test the “algorithmic accountability” of their artificial intelligence systems, such as facial recognition. The bills came just weeks after the Department of Housing and Urban Development sued Facebook, charging that the social media giant’s ad-targeting system allows advertisers to exclude protected groups from seeing housing ads.

San Francisco is considering banning city agencies from using facial recognition. Privacy laws in Texas and Illinois require anyone recording biometric data, including facial recognition, to give people notice and obtain their consent. The Trump administration has made developing “safe and trustworthy” algorithms one of the key objectives of the White House’s AI initiative.

The NYU researchers say it’s critical for the AI field to diversify the homogeneous group of engineers building these automated systems. Yet the gender gap in computer science is widening.

As of 2015, women made up 18% of computer science majors in the U.S., down from a high of 37% in 1984. Women make up less than one-quarter of the computer science workforce and receive median salaries that are 66% of their male counterparts’, according to the National Academies of Sciences, Engineering, and Medicine. The number of bachelor’s degrees in engineering awarded to black women declined 11% between 2000 and 2015.

The problem is even more acute in AI. Research shows that most speakers and attendees at machine learning conferences, and 80% of AI professors, are men. Women account for 15% of AI research staff at Facebook and 10% at Google. While there is very little public data on racial diversity in AI, anecdotal evidence suggests the gaps are even wider, the study says.

Last month, when Stanford University unveiled an artificial intelligence institute with 120 faculty and technology leaders charged with representing humanity, not a single one was black. Boards created by tech companies to examine the ethics of artificial intelligence also lack members from underrepresented groups.

Google announced an “external advisory council” on AI ethics last month. NAACP president and CEO Derrick Johnson complained that the body “lacks a qualified member of the civil rights community.” “This is offensive to people of color & indicates AI tech wouldn’t have the safeguards to prevent implicit & racial biases,” he wrote on Twitter. Google later scrapped the council.


Photo: Massachusetts Institute of Technology facial recognition researcher Joy Buolamwini holds a white mask she had to use so that software could detect her face. (Steven Senne/AP)
