
Face recognition researcher fights Amazon over biased AI


CAMBRIDGE, Mass. (AP) — Facial recognition technology was already seeping into everyday life — from your photos on Facebook to police scans of mugshots — when Joy Buolamwini noticed a serious glitch: Some of the software couldn’t detect dark-skinned faces like hers.

That revelation sparked the Massachusetts Institute of Technology researcher to launch a project that’s having an outsize influence on the debate over how artificial intelligence should be deployed in the real world.

Her tests on software created by brand-name tech firms such as Amazon uncovered much higher error rates in classifying the gender of darker-skinned women than for lighter-skinned men.

Along the way, Buolamwini has spurred Microsoft and IBM to improve their systems and irked Amazon, which publicly attacked her research methods. On Wednesday, a group of AI scholars, including a winner of computer science’s top prize, launched a spirited defense of her work and called on Amazon to stop selling its facial recognition software to police.

Her work has also caught the attention of political leaders in statehouses and Congress and led some to seek limits on the use of computer vision tools to analyze human faces.

“There needs to be a choice,” said Buolamwini, a graduate student and researcher at MIT’s Media Lab. “Right now, what’s happening is these technologies are being deployed widely without oversight, oftentimes covertly, so that by the time we wake up, it’s almost too late.”

Buolamwini is hardly alone in expressing caution about the fast-moving adoption of facial recognition by police, government agencies and businesses from stores to apartment complexes. Many other researchers have shown how AI systems, which look for patterns in huge troves of data, will mimic the institutional biases embedded in the data they are learning from. For instance, if AI systems are developed using images of mostly white men, the systems will work best in recognizing white men.

Those disparities can sometimes be a matter of life or death: One recent study of the computer vision systems that enable self-driving cars to “see” the road shows they have a harder time detecting pedestrians with darker skin tones.

What’s struck a chord about Buolamwini’s work is her method of testing the systems created by well-known companies. She applies such systems to a skin-tone scale used by dermatologists, then names and shames those that show racial and gender bias. Buolamwini, who’s also founded a coalition of scholars, activists and others called the Algorithmic Justice League, has blended her scholarly investigations with activism.

“It adds to a growing body of evidence that facial recognition affects different groups differently,” said Shankar Narayan, of the American Civil Liberties Union of Washington state, where the group has sought restrictions on the technology. “Joy’s work has been part of building that awareness.”

Amazon, whose CEO, Jeff Bezos, she emailed directly last summer, has responded by aggressively taking aim at her research methods.

A Buolamwini-led study published just over a year ago found disparities in how facial-analysis systems built by IBM, Microsoft and the Chinese company Face Plus Plus classified people by gender. Darker-skinned women were the most misclassified group, with error rates of up to 34.7%. By contrast, the maximum error rate for lighter-skinned males was less than 1%.

The study called for “urgent attention” to address the bias.

“I responded pretty much right away,” said Ruchir Puri, chief scientist of IBM Research, describing an email he received from Buolamwini last year.

Since then, he said, “it’s been a very fruitful relationship” that informed IBM’s unveiling this year of a new 1 million-image database for better analyzing the diversity of human faces. Previous systems have been overly reliant on what Buolamwini calls “pale male” image repositories.

Microsoft, which had the lowest error rates, declined comment. Messages left with Megvii, which owns Face Plus Plus, weren’t immediately returned.

Months after her first study, when Buolamwini worked with University of Toronto researcher Inioluwa Deborah Raji on a follow-up test, all three companies showed major improvements.

But this time they also added Amazon, which has sold the system it calls Rekognition to law enforcement agencies. The results, published in late January, showed Amazon badly misidentifying darker-hued women.

“We were surprised to see that Amazon was where their competitors were a year ago,” Buolamwini said.

Amazon dismissed what it called Buolamwini’s “erroneous claims” and said the study confused facial analysis with facial recognition, improperly measuring the former with techniques for evaluating the latter.

“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” Matt Wood, general manager of artificial intelligence for Amazon’s cloud-computing division, wrote in a January blog post. Amazon declined requests for an interview.

“I didn’t know their reaction would be quite so hostile,” Buolamwini said recently in an interview at her MIT lab.

Coming to her defense Wednesday was a coalition of researchers, including AI pioneer Yoshua Bengio, recent winner of the Turing Award, considered the tech field’s version of the Nobel Prize.

They criticized Amazon’s response, especially its distinction between facial recognition and analysis.

“In contrast to Dr. Wood’s claims, bias found in one system is cause for concern in the other, particularly in use cases that could severely impact people’s lives, such as law enforcement applications,” they wrote.

Amazon’s few publicly known clients have defended its system.

Chris Adzima, senior information systems analyst for the Washington County Sheriff’s Office in Oregon, said the agency uses Amazon’s Rekognition to identify the most likely matches among its collection of roughly 350,000 mug shots. But because a human makes the final decision, “the bias of that computer system is not transferred over into any results or any action taken,” Adzima said.

But increasingly, regulators and legislators are having their doubts. A bipartisan bill in Congress seeks limits on facial recognition. Legislatures in Washington and Massachusetts are considering laws of their own.

Buolamwini said a major message of her research is that AI systems need to be carefully reviewed and consistently monitored if they’re going to be used on the public. Not just to audit for accuracy, she said, but to ensure face recognition isn’t abused to violate privacy or cause other harms.

“We can’t just leave it to companies alone to do these kinds of checks,” she said.
