The Guardian (USA)

San Francisco was right to ban facial recognition. Surveillance is a real danger

- Veena Dubal is an associate professor of law at the University of California, Hastings

San Francisco’s recent municipal ordinance banning the use of facial recognition technology by city and county agencies has received international attention. The first of its kind anywhere in the US, the law is a preemptive response to the proliferation of a technology that the city of San Francisco does not yet deploy but which is already in use elsewhere. Since the passage of the ordinance, a debate has erupted in cities and states around the country: should other localities follow San Francisco’s example?

The answer is a resounding yes. The concerns that motivated the San Francisco ban are rooted not just in the potential inaccuracy of facial recognition technology, but in a long national history of politicized and racially biased state surveillance.

Detractors who oppose the ordinance in the name of “public safety” acknowledge the technology’s current limitations (recent studies have shown that facial recognition systems are alarmingly inaccurate in identifying racial minorities, women, and transgender people). But they argue that as machine learning becomes less biased the technology could actually upend human discrimination. They — mainly corporate lobbyists and law enforcement representatives — maintain that this absolute ban (rather than the limited regulations advocated by Big Tech) is a step backwards for public safety because it leaves surveillance to people and not machines.

Based on my years of working as a civil rights advocate and attorney representing Muslim Americans in the aftermath of September 11th, I recognize that the debate’s singular focus on the technology is a red herring. Even in an imaginary future where algorithmic discrimination does not exist, facial recognition software simply cannot de-bias the practice and impact of state surveillance. In fact, the public emphasis on curable algorithmic inaccuracies leaves the concerns that motivated the San Francisco ban historically and politically decontextualized.

This ordinance was crafted through the sustained advocacy of an intersectional grassroots coalition driven not just by concerns about hi-tech dystopia, but by a long record of overbroad surveillance and its deleterious impacts on economically and politically marginalized communities. As Matt Cagle, a leader in this coalition and an attorney at the ACLU of Northern California, told me, “The driving force behind this historic law was a coalition of 26 organizations. Not coincidentally, these Bay Area groups represented those who have been most harmed by local government profiling and surveillance in our city: people of color, Muslim Americans, immigrants, the LGBTQ community, the unhoused, and more.”

Indeed, while San Francisco is known across the world as an “incubat[or] of dissent and individual liberties,” the local police department — like many across the United States — has a decades-long, little-known history of nefarious surveillance activities.

A reported 83% of domestic intelligence gathering for J Edgar Hoover’s notorious Counter Intelligence Program (commonly known as Cointelpro) took place in the Bay Area — much of it at the hands of local police. From the 1950s well into the 1970s, the information gathered through this covert state program — which, when discovered, shocked the conscience of America — was used to infiltrate, discredit, and disrupt the now-celebrated civil rights movement.

After Cointelpro was congressionally disbanded and procedural safeguards put in place, community members in the 1980s and early 1990s learned that some San Francisco police officers continued to surreptitiously spy — without any evidence of criminal wrongdoing — on individuals and groups based on their political activities. In at least one instance, information gathered by local police officers on law-abiding citizens was alleged to have been sold to foreign governments.

Despite the subsequent passage of additional local procedural safeguards, which limited intelligence-gathering on First Amendment-protected activities to instances where reasonable suspicion of criminal activity could be articulated, in the years following September 11th, members of San Francisco’s Muslim American community again found themselves under unjust, non-criminally-predicated surveillance.

These past and present chronicles of injustice highlight how face recognition systems — like other surveillance technologies before them — can disproportionately harm people already historically subject to profiling and abuse, including immigrants, people of color, political activists, and the formerly incarcerated. And they demonstrate that even when legal procedures and oversight are thoughtfully put into place, these safeguards can both be rolled back (especially in times of hysteria) and violated.

As the debate about facial surveillance technologies and “public safety” continues to rage, policymakers (and corporate decision-makers) should deliberate not just over the technology itself, but on these shameful political histories. In doing so, they should remember (or be reminded) that more information gathering — while certainly lucrative and occasionally comforting — does not always create safer communities.

Even if face surveillan­ce is 100% neutral and devoid of discrimina­tory tendencies, humans will determine when and where the surveillan­ce takes place. Humans — with both implicit and explicit biases — will make the discretion­ary decisions about how to utilize the gathered data. And humans — often the most vulnerable — will be the ones disproport­ionately and unjustly impacted.

Amid the seemingly inevitable conquest of our everyday lives by new forms of technological surveillance, San Francisco’s ban — and the diverse coalition-based movement that achieved it — proves that local democracy can still be leveraged to shift power- and decision-making into the hands of the people. The real, chilling histories and impacts of past surveillance on freedom of association, religion, and speech — and not imagined fears about information collected through machine-learning systems — motivated the broad coalition of community groups to push for the San Francisco face surveillance ban. Their example could — and should — spark a movement that spreads across the country.

Photograph: Ian Davidson/Alamy Stock Photo. The government’s embrace of facial recognition technology has red flags all over it, argues Veena Dubal.
