Waterloo Region Record

Tech trouble

Facial recognition works best if you’re a white guy, study shows

- STEVE LOHR

Facial recognition technology is improving by leaps and bounds. Some commercial software can now tell the gender of a person in a photograph.

When the person in the photo is a white man, the software is right 99 per cent of the time.

But the darker the skin, the more errors arise — up to nearly 35 per cent for images of darker-skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and gender.

These disparate results, calculated by Joy Buolamwini, a researcher at the Massachusetts Institute of Technology Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.

In modern artificial intelligence, data rules. AI software is only as smart as the data used to train it. If there are many more white men than black women in the system, it will be worse at identifying the black women.
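The mechanism the passage describes can be shown with a toy sketch (hypothetical numbers, not the study's data): a simple gender classifier trained almost entirely on one demographic group learns decision boundaries tuned to that group, and misclassifies members of an underrepresented group whose features cluster elsewhere.

```python
# Toy illustration (hypothetical data): a nearest-centroid "gender"
# classifier trained on 100 faces per gender from group A but only
# 2 per gender from group B. Each face is reduced to one feature,
# and group B's feature values cluster at different locations.

# Training data: (feature, gender) pairs, dominated by group A.
train = ([(0.0, "male")] * 100 + [(1.0, "female")] * 100 +   # group A
         [(1.2, "male")] * 2 + [(2.2, "female")] * 2)        # group B

def centroid(gender):
    # "Training": average the feature values seen for this gender.
    vals = [x for x, g in train if g == gender]
    return sum(vals) / len(vals)

centroids = {g: centroid(g) for g in ("male", "female")}

def predict(x):
    # Classify a face by its nearest gender centroid.
    return min(centroids, key=lambda g: abs(x - centroids[g]))

# Evaluate on one representative face per gender per group.
tests = {"A": [(0.0, "male"), (1.0, "female")],
         "B": [(1.2, "male"), (2.2, "female")]}
for group, faces in tests.items():
    errors = sum(predict(x) != g for x, g in faces)
    print(group, "error rate:", errors / len(faces))
# A error rate: 0.0, B error rate: 0.5
```

Because group A supplies 100 of every 102 training points, both centroids sit where A's faces cluster; group B's men land closer to the "female" centroid and are misclassified, while group A is classified perfectly.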

One widely used facial recognitio­n data set was estimated to be more than 75 per cent male and more than 80 per cent white, according to another research study.

The new study also raises broader questions of fairness and accountability in artificial intelligence at a time when investment in and adoption of the technology is racing ahead.

Today, facial recognition software is being deployed by companies in various ways, including to help target product pitches based on social media profile pictures. But companies are also experimenting with facial identification and other AI technology as an ingredient in automated decisions with higher stakes like hiring and lending.

Researchers at the Georgetown Law School estimated that 117 million American adults are in facial recognition networks used by law enforcement — and that African Americans were most likely to be singled out, because they were disproportionately represented in mug-shot databases.

Facial recognitio­n technology is lightly regulated so far.

“This is the right time to be addressing how these AI systems work and where they fail — to make them socially accountable,” said Suresh Venkatasubramanian, a professor of computer science at the University of Utah.

Buolamwini, a young African-American computer scientist, experienced the bias of facial recognition firsthand. When she was an undergraduate at the Georgia Institute of Technology, programs would work well on her white friends, she said, but not recognize her face at all. She figured it was a flaw that would surely be fixed before long.

But a few years later, after joining the MIT Media Lab, she ran into the missing-face problem again. Only when she put on a white mask did the software recognize hers as a face.

By then, facial recognition software was increasingly moving out of the lab and into the mainstream.

“OK, this is serious,” she recalled deciding then. “Time to do something.”

So she turned her attention to fighting the bias built into digital technology. Now 28 and a doctoral student, after studying as a Rhodes scholar and a Fulbright fellow, she is an advocate in the new field of “algorithmic accountability,” which seeks to make automated decisions more transparent, explainable and fair.

Her short TED Talk on coded bias has been viewed more than 940,000 times, and she founded the Algorithmic Justice League, a project to raise awareness of the issue.

In her newly published paper, which will be presented at a conference this month, Buolamwini studied the performance of three leading facial recognition systems — by Microsoft, IBM and Megvii of China — by classifying how well they could guess the gender of people with different skin tones. These companies were selected because they offered gender classification features in their facial analysis software — and their code was publicly available for testing.

She found them all wanting.

To test the commercial systems, Buolamwini built a data set of 1,270 faces, drawing on photographs of lawmakers from countries with a high percentage of women in office.

The sources included three African nations with predominantly dark-skinned populations, and three Nordic countries with mainly light-skinned residents.

The African and Nordic faces were scored according to a six-point labelling system used by dermatologists to classify skin types.

The medical classifications were determined to be more objective and precise than race.

Then, each company’s software was tested on the curated data, crafted for gender balance and a range of skin tones. The results varied somewhat. Microsoft’s error rate for darker-skinned women was 21 per cent, while IBM’s and Megvii’s rates were nearly 35 per cent. They all had error rates below one per cent for light-skinned males.
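The comparison above boils down to a per-group error rate: for each skin-type group, the fraction of faces whose gender the software guessed wrong. A minimal sketch of that bookkeeping, using made-up labels rather than the study's data:

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Fraction of wrong predictions within each group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for truth, guess, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != guess:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit of four faces across two skin-type groups.
y_true = ["female", "female", "male",    "male"]
y_pred = ["female", "male",   "male",    "male"]
groups = ["darker", "darker", "lighter", "lighter"]
print(error_rates_by_group(y_true, y_pred, groups))
# {'darker': 0.5, 'lighter': 0.0}
```

Reporting accuracy this way, instead of as a single overall number, is what exposes the gap: a system can score well on average while failing badly on one subgroup.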

Buolamwini shared the research results with each of the companies. IBM said in a statement to her that the company had steadily improved its facial analysis software and was “deeply committed” to “unbiased” and “transparent” services. This month, the company said, it will roll out an improved service with a nearly 10-fold increase in accuracy on darker-skinned women.

Microsoft said that it had “already taken steps to improve the accuracy of our facial recognition technology” and that it was investing in research “to recognize, understand and remove bias.”

TONY LUONG, NEW YORK TIMES
Joy Buolamwini, a researcher at the MIT Media Lab, conducted a study that measured how facial recognition technology works on people of different races and gender.
