Microsoft facial ID gets better at identifying people of color
Microsoft last week announced its facial-recognition system is now more accurate in identifying people of color, touting its progress at tackling one of the technology’s biggest biases.
But critics, citing Microsoft’s work with Immigration and Customs Enforcement, quickly seized on how that improved technology might be used. The agency contracts with Microsoft for a set of cloud-computing tools that the tech giant says is largely limited to office work, but which can also include face recognition.
Columbia University professor Alondra Nelson tweeted, “We must stop confusing ‘inclusion’ in more ‘diverse’ surveillance systems with justice and equality.”
Today’s facial-recognition systems more often misidentify people of color because of a long-running data problem: The massive sets of facial images they train on skew heavily toward white men. A Massachusetts Institute of Technology study this year of the face-recognition systems designed by Microsoft, IBM and the China-based Face++ found that their accuracy in classifying a person’s gender was 99 percent for light-skinned males and 70 percent for dark-skinned females.
In a project unveiled last Thursday, Joy Buolamwini, an artificial-intelligence researcher at the MIT Media Lab, showed facial-recognition systems consistently giving the wrong gender for famous women of color, including Oprah, Serena Williams, Michelle Obama and Shirley Chisholm, the first black female member of Congress. “Can machines ever see our grandmothers as we knew them?” she asked.
The companies have in recent months responded by pouring many more photos into the mix, hoping to train the systems to better distinguish among a wider range of faces than just white ones. IBM said last week it used 1 million facial images taken from the photo-sharing site Flickr to build the “world’s largest facial dataset,” which it will release publicly for other companies to use.
Both IBM and Microsoft said the added data allowed their systems to recognize gender and skin tone with much more precision. Microsoft said its improved system had reduced the error rates for darker-skinned men and women by “up to 20 times,” and reduced error rates for all women by nine times. The company did not define a baseline for that reduction or give an estimate of accuracy, which can vary widely depending on factors such as image quality.
Those improvements were heralded by some for taking aim at the prejudices in a rapidly spreading technology, including potentially reducing the kinds of false positives that could lead police officers to misidentify a criminal suspect. Clare Garvie, an associate at Georgetown Law’s Center on Privacy & Technology, said, “Any effort by companies to make their systems more equitable and accurate across demographics can only be a good thing.”
But others suggested the technology’s increasing accuracy could also make it more marketable. The systems should be accurate, “but that’s just the beginning, not the end, of their ethical obligation,” said David Robinson, managing director of the think tank Upturn, which co-signed a letter in April calling face recognition “categorically unethical to deploy.”
Face recognition’s promise of a simple, long-range identification system has made it a compelling tool for criminal justice, private security and mass surveillance. But for the companies racing to develop and sell it, the technology can also function as a double-edged sword, in which pushes to refine its capabilities can be seen as potentially dangerous or morally fraught.
At the center of that debate is Microsoft, whose multimillion-dollar contracts with ICE came under fire amid the agency’s separations of migrant parents and children at the Mexican border.
Face recognition is a core feature of Azure Government, the cloud-computing system Microsoft has promoted to ICE and other agencies as a way to efficiently process lots of data and tap artificial-intelligence applications such as image analysis and real-time translation.