Sun Sentinel Palm Beach Edition

Microsoft to erase face tools in a push for ‘responsible AI’

By Kashmir Hill

For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person’s age, gender and emotional state can be biased, unreliable or invasive — and should not be sold.

Microsoft said this week that it planned to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. They will stop being available to new users this month and will be phased out for existing users within the year.

The changes are part of a push by Microsoft for tighter controls of its artificial intelligence products. After a two-year review, a team at Microsoft has developed a “Responsible AI Standard,” a 27-page document that sets out requirements for AI systems to ensure they are not going to have a harmful impact on society.

The requirements include ensuring that systems provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups.”

Before they are released, technologies that would be used to make important decisions about a person’s access to jobs, education, health care, financial services or a life opportunity are subject to a review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.

There were heightened concerns at Microsoft around the emotion recognition tool, which labeled someone’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise.

“There’s a huge amount of cultural and geographic and individual variation in the way in which we express ourselves,” Crampton said. That led to reliability concerns, along with whether “facial expression is a reliable indicator of your internal emotional state,” she said.

The age and gender analysis tools being eliminated — along with other tools to detect facial attributes — could be useful to interpret visual images for blind or low-vision people, for example, but the company decided it was problematic to make the profiling tools generally available to the public, Crampton said.

In particular, she added, the system’s so-called gender classifier was binary, “and that’s not consistent with our values.”

Microsoft will also put new controls on its face recognition feature, which can be used to perform identity checks or search for a particular person. Uber uses the software in its app to verify that a driver’s face matches the ID on file for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool will need to apply for access and explain how they plan to deploy it.

Users will also be required to apply and explain how they will use other potentially abusive AI systems, such as Custom Neural Voice. The service can generate a human voice print, based on a sample of someone’s speech, so that authors, for example, can create synthetic versions of their voice to read their audiobooks in languages they don’t speak.

Photo: Natasha Crampton is Microsoft’s chief responsible AI officer. Microsoft is making a push for tighter control of its artificial intelligence products. (Grant Hindsley/The New York Times)
