Researchers call facial recognition ‘imperfect’
Survey: Technology brings mixed feelings
When it comes to facial recognition software, Americans don't place much trust in tech giants such as Facebook, Apple and Amazon. On the other hand, most believe that government agencies and law enforcement officials use the identification tools responsibly, a study finds.
The Pew Research Center released a report Thursday that sought to examine attitudes among U.S. adults toward facial recognition.
The survey suggests that even as artificial-intelligence-powered cameras raise questions about fairness and accuracy, more than half of Americans (56%) believe law enforcement agencies will put the tech to good use.
By contrast, only about a third of adults think tech companies will use the tools responsibly. Half of U.S. adults say they trust how companies use facial recognition either not "too much" or not "at all."
Widespread adoption of the controversial software is a relatively new phenomenon, with the technology cropping up in airports and in city surveillance cameras just this year. Still, most Americans are at least somewhat familiar with the concept.
In fact, 86% have heard of facial recognition, and awareness spans nearly all demographic groups. There are, however, stark contrasts in how ethnic groups view the technology.
A larger share of whites (64%) believes it's acceptable for law enforcement to use facial recognition in public spaces, compared with 47% of blacks and 55% of Hispanics.
Accuracy
One of the most talked-about criticisms of facial recognition is that it can lead to racial profiling.
In 2018, an MIT researcher found that some facial recognition software could accurately identify a white man but failed to identify a person with darker skin. That year, Amazon's controversial facial recognition program, Rekognition, falsely identified 28 members of Congress during a test by the American Civil Liberties Union.
Amazon said its technology can be used for a long list of beneficial purposes “from assisting in the identification of criminals to helping find missing children to inhibiting human trafficking.”
According to Pew Research data scientists Stefan Wojcik and Emma Remy, “These systems can fail in ways that seem difficult to understand and hard to predict – such as showing higher rates of error on the faces of people with darker skin relative to those with lighter skin, or classifying prominent members of Congress as criminals.”
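The disparity the researchers describe is, at bottom, a difference in per-group error rates: a system can look acceptable on average while failing much more often for one group. A minimal sketch of how such a gap is measured, using invented predictions and labels (not Pew's actual evaluation data):

```python
# Hypothetical (predicted, actual) label pairs for two demographic groups.
# The data below is invented purely to show how a per-group breakdown
# can expose a disparity that the overall error rate hides.

def error_rate(pairs):
    """Fraction of (predicted, actual) pairs that disagree."""
    return sum(pred != actual for pred, actual in pairs) / len(pairs)

group_a = [("M", "M"), ("F", "F"), ("M", "M"), ("F", "F")]  # all correct
group_b = [("M", "F"), ("F", "F"), ("M", "M"), ("F", "M")]  # half wrong

overall = error_rate(group_a + group_b)
print(f"overall error: {overall:.0%}")              # 25%
print(f"group A error: {error_rate(group_a):.0%}")  # 0%
print(f"group B error: {error_rate(group_b):.0%}")  # 50%
```

A 25% overall error rate conceals that every mistake falls on group B, which is why audits of these systems report accuracy broken out by subgroup rather than a single aggregate number.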
After conducting several tests to find out if facial recognition could accurately identify genders based on outward appearances, the researchers concluded that deep learning systems trained to identify humans can be "imperfect," which "is to be expected."
“What may be less obvious is that they can be significantly less reliable for some groups than others – and that these differences may not necessarily be driven by intuitive or obvious factors,” researchers wrote in the report.
“People who rely on the decisions these systems make should approach the results they produce with the knowledge that they may be hiding problems or biases.”