Boston Sunday Globe

Joy Buolamwini

ALGORITHMIC JUSTICE LEAGUE | ARTIFICIAL INTELLIGENCE

BY AIDAN RYAN — Aidan Ryan is a Boston Globe business reporter. Send comments to aidan.ryan@globe.com.


Joy Buolamwini didn’t set out to become one of the best-known critics of artificial intelligence. But while a graduate student at MIT in 2015, Buolamwini — who is Black — discovered that facial recognition software didn’t work for her face. That kicked off her journey from aspiring academic to accidental advocate, leading her to formulate the concept of the “coded gaze,” or how the priorities, preferences, and prejudices of technologists get built into algorithms, software, and other products. That insight has grounded her work as she raises awareness of how technology can perpetuate bias and discrimination — and inflict harm — even without malicious intent.

Her work over nearly a decade couldn’t have set Buolamwini up better for this moment. Big tech companies (such as Google, Microsoft, and Amazon) and AI-focused firms (such as OpenAI and Anthropic) are investing billions of dollars into AI-powered chatbots and applications that generate images and video from text prompts. “While no one is immune to algorithmic abuse,” Buolamwini says, “those already marginalized in society shoulder an even larger burden.”

Following her experience at MIT, Buolamwini gave a TED talk on combating bias in algorithms. In 2016, she founded the Algorithmic Justice League — a nonprofit organization that spreads awareness of the coded gaze through art and research. She now serves as the organization’s president and artist in chief and takes on speaking engagements across the world. Buolamwini was the subject of the 2020 Netflix documentary Coded Bias.

Buolamwini says warnings about AI are more relevant than ever before. She worries about it being used in weapons systems to target people and about the military technology being adapted for policing.

But her biggest concern may be how AI can “kill people slowly.”

Her first book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, argues AI can play a role in denying people access to housing, health care, and other necessities. If AI is used to screen applicants for a job or claimants for health benefits, she says, it can perpetuate discrimination through the coded gaze — and harm people throughout their lives.

“By recognizing and acknowledging the existence of the coded gaze,” Buolamwini says, “people can work toward mitigating algorithmic bias, preventing AI harms, and ensuring that technology is used ethically and responsibly to prevent harm and discrimination.”
