The Mercury News

Tech firms address bias in facial recognition

Technology has had problems identifying non-white people

- By Levi Sumagaysay lsumagaysay@bayareanewsgroup.com

If a picture paints a thousand words, facial recognition paints two: It’s biased.

You might remember a few years ago that Google Photos automatically tagged images of black people as “gorillas,” or Flickr (owned by Yahoo at the time) doing the same and tagging people as “apes” or “animals.”

Earlier this year, the New York Times reported on a study by Joy Buolamwini, a researcher at the MIT Media Lab, on artificial intelligence, algorithms and bias: She found that facial recognition is most accurate for white men, and least accurate for darker-skinned people, especially women.

Now — as facial recognition is being considered for use or is being used by police, airports, immigration officials and others — Microsoft says it has improved its facial-recognition technology to the point where it has reduced error rates for darker-skinned men and women by up to 20 times. For women alone, the company says it has reduced error rates by nine times.

Microsoft made improvements by collecting more data and expanding and revising the datasets it used to train its AI.

From a company blog post Tuesday: “The higher error rates on females with darker skin highlights an industrywide challenge: Artificial intelligence technologies are only as good as the data used to train them. If a facial recognition system is to perform well across all people, the training dataset needs to represent a diversity of skin tones as well as factors such as hairstyle, jewelry and eyewear.”

In other words, the company that brought us Tay, the sex-crazed and Nazi-loving chatbot, wants us to know it is trying, it’s really trying. (You might also remember that Microsoft took its AI experiment Tay offline in 2016 after she quickly began to spew crazy and racist things on Twitter, reflecting the stuff she learned online. The company blamed a “coordinated attack by a subset of people” for Tay’s corruption.)

In related news, IBM announced Wednesday that it will release the world’s largest facial dataset to technologists and researchers, to help in studying bias. It’s actually releasing two datasets this fall: one that has more than 1 million images, and another that has 36,000 facial images equally distributed by ethnicity, gender and age.

“Our researchers are assembling images from public databases — which will be annotated with attributes — leveraging already existing images that have been approved for research use to reduce sample selection bias,” Jenny Galitz McTighe, vice president of Watson & AI Communications, said Wednesday.

Big Blue also said it improved its Watson Visual Recognition service for facial analysis earlier this year, decreasing its error rate by nearly tenfold.

“AI holds significant power to improve the way we live and work, but only if AI systems are developed and trained responsibly, and produce outcomes we trust,” IBM said in a blog post Wednesday. “Making sure that the system is trained on balanced data, and rid of biases, is critical to achieving such trust.”

THE ASSOCIATED PRESS ARCHIVES — IBM is releasing large datasets of diverse facial images to help researchers improve accuracy identifying people with darker skin.
