Houston Chronicle Sunday

Privacy activists alarmed by culled photos

By Cade Metz

SAN FRANCISCO — Dozens of databases of people's faces are being compiled without their knowledge by companies and researchers, with many of the images then being shared around the world, in what has become a vast ecosystem fueling the spread of facial recognition technology.

The databases are pulled together with images from social networks, photo websites, dating services like OkCupid and cameras placed in restaurants and on college quads. While there is no precise count of the datasets, privacy activists have pinpointed repositories that were built by Microsoft, Stanford University and others, with one holding more than 10 million images while another had more than 2 million.

The facial compilations are being driven by the race to create leading-edge facial recognition systems. This technology learns how to identify people by analyzing as many digital pictures as possible using "neural networks," which are complex mathematical systems that require vast amounts of data to build pattern recognition.

Tech giants like Facebook and Google have most likely amassed the largest face datasets, which they do not distribute, according to research papers. But other companies and universities have widely shared their image troves with researchers, governments and private enterprises in Australia, China, India, Singapore and Switzerland for training artificial intelligence, according to academics, activists and public papers.

Companies and labs have gathered facial images for more than a decade, and the databases are merely one layer in building facial recognition technology. But people often have no idea that their faces ended up in them. And while names are typically not attached to the photos, individuals can be recognized because each face is unique to a person.

Questions about the datasets are rising because the technologies that they have enabled are being used in potentially invasive ways. Documents released last Sunday revealed that Immigration and Customs Enforcement officials employed facial recognition technology to scan motorists' photos to identify unauthorized immigrants. The FBI also spent more than a decade using such systems to compare driver's license and visa photos against the faces of suspected criminals, according to a Government Accountability Office report last month. On Wednesday, a congressional hearing tackled the government's use of the technology.

There is no oversight of the datasets. Activists and others said they were angered by the possibility that people's likenesses had been used to build ethically questionable technology and that the images could be misused. At least one facial database created in the United States was shared with a company in China that has been linked to ethnic profiling of the country's minority Uighur Muslims.

Over the past several weeks, some companies and universities, including Microsoft and Stanford, removed their facial datasets from the internet because of privacy concerns. But given that the images were already so widely distributed, they are most likely still being used in the United States and elsewhere, researchers and activists said.

"You come to see that these practices are intrusive, and you realize that these companies are not respectful of privacy," said Liz O'Sullivan, who oversaw one of these databases at the artificial intelligence startup Clarifai. She said she left the New York-based company in January to protest such practices.

"The more ubiquitous facial recognition becomes, the more exposed we all are to being part of the process," she said.

Google, Facebook and Microsoft declined to comment.

One database, which dates to 2014, was put together by researchers at Stanford. It was called Brainwash, after a San Francisco cafe of the same name, where the researchers tapped into a camera. Over three days, the camera took more than 10,000 images, which went into the database, the researchers wrote in a 2015 paper. The paper did not address whether cafe patrons knew their images were being taken and used for research. (The cafe has closed.)

The Stanford researchers then shared Brainwash. According to research papers, it was used in China by academics associated with the National University of Defense Technology and Megvii, an artificial intelligence company that The New York Times previously reported has provided surveillance technology for monitoring Uighurs.

The Brainwash dataset was removed from its original website last month after Adam Harvey, an activist in Germany who tracks the use of these repositories through a website called MegaPixels, drew attention to it. Links between Brainwash and papers describing work to build AI systems at the National University of Defense Technology in China have also been deleted, according to documentation from Harvey.

Stanford researchers who oversaw Brainwash did not respond to requests for comment. "As part of the research process, Stanford routinely makes research documentation and supporting materials available publicly," a university official said. "Once research materials are made public, the university does not track their use."

Matt Zeiler, founder and chief executive of Clarifai, the AI startup, said his company had built a facial database with images from OkCupid, a dating site. He said Clarifai had access to OkCupid’s photos because some of the dating site’s founders invested in his company.

An OkCupid spokeswoman said that Clarifai contacted the company in 2014 "about collaborating to determine if they could build unbiased AI and facial recognition technology" and that the dating site "did not enter into any commercial agreement then and have no relationship with them now." She did not address whether Clarifai had gained access to OkCupid's photos without its consent.

Photo: MegaPixels via New York Times — A sample image from the Brainwash database, created by Stanford University researchers, shows patrons at a San Francisco cafe. The database and others like it are used to train facial recognition software without the knowledge of the people photographed.