The Guardian (USA)

Google reportedly targeted people with 'dark skin' to improve facial recognition

- Julia Carrie Wong

Facial recognition technology’s failures when it comes to accurately identifying people of color have been well documented and much criticized. But an attempt by Google to improve its facial recognition algorithms by collecting data from people with dark skin is raising further concerns about the ethics of its data harvesting.

Google has been using subcontracted workers to collect face scans from members of the public in exchange for $5 gift cards, according to a report from the New York Daily News. The face scan collection project had been previously reported, but anonymous sources described unethical and deceptive practices to the Daily News.

The subcontracted workers were employed by staffing firm Randstad but directed by Google managers, according to the report. They were instructed to target people with “darker skin tones” and those who would be more likely to be enticed by the $5 gift card, including homeless people and college students.

“They said to target homeless people because they’re the least likely to say anything to the media,” a former contractor told the Daily News. “The homeless people didn’t know what was going on at all.”

“I feel like they wanted us to prey on the weak,” another contractor told the Daily News.

Randstad did not immediately respond to a request for comment. Google defended the project but said it was investigating allegations of wrongdoing.

The contractors also described using deceptive tactics to persuade subjects to agree to the face scans, including mischaracterizing the face scan as a “selfie game” or “survey”, pressuring people to sign a consent form without reading it, and concealing the fact that the phone the research subjects were handed to “play with” was taking video of their faces.

“We’re taking these claims seriously and investigating them,” a Google spokesperson said in a statement. “The allegations regarding truthfulness and consent are in violation of our requirements for volunteer research studies and the training that we provided.”

The spokesperson added that the “collection of face samples for machine learning training” was intended to “build fairness” into the “face unlock feature” for the company’s new phone, the Pixel 4.

“It’s critical we have a diverse sample, which is an important part of building an inclusive product,” the spokesperson said, adding that the face unlock feature will provide users with “a powerful new security measure”.

But the project has drawn harsh condemnation from digital civil rights and racial justice advocates.

The controversy touches on tricky questions about algorithmic bias and data privacy. Is it better to improve facial recognition for people of all skin colors – or ban the technology entirely, as a handful of US cities have done this year? And how much should users be compensated for providing companies such as Google with their personal data?

“Though research shows clearly that facial recognition systems disproportionately misidentify black and brown faces, the goal should not be to improve the accuracy of this extremely invasive system,” said Malkia Cyril, founder and executive director of MediaJustice, a national racial justice organization advancing media and technology rights. “In the context of existing racial bias in the criminal legal system and in counter-terrorism, it should be no one’s goal to make the technology easier to use against people of color – especially black, AMEMSA [Arab, Middle Eastern, Muslim and South Asian] communities and undocumented people.”

“This is totally unacceptable conduct from a Google contractor. It’s why the way AI is built today needs to change,” said Jake Snow, an attorney with the ACLU of Northern California. “The answer to algorithmic bias is not to target the most vulnerable.”

“Whether it’s racist because it’s accurate or because it’s inaccurate, facial recognition and biometric tools in general fuel racial bias,” said Cyril. “No amount of money or informed consent is enough to produce a weakly regulated technology already being used to violate the human rights of millions.”

Rashad Robinson, executive director of Color of Change, said: “Google’s tactic of targeting economically vulnerable populations is morally reprehensible. There’s no way to put a sticker price on biometric data – nor should there be.”

“Facial recognition software has bias baked into its coding, and has primarily been used to control our movements and decide who belongs and who doesn’t, in public and private spaces,” he added. “This technology is dangerous – especially for black people – and that’s why Color Of Change is mobilizing for complete legislative bans on facial recognition across the country. We don’t need more tech experiments. We need government regulation to stop the unfettered growth of this technology.”

A report from the New York Daily News says Google was using deceptive practices to collect face scans. Photograph: Jeff Chiu/Associated Press