Too many flaws in commonwealth’s facial recognition law
In Virginia and beyond, there is bipartisan understanding that facial recognition poses significant threats to privacy, civil rights and civil liberties. There is also bipartisan agreement that Virginia’s new law regulating facial recognition does not protect against these threats.
Specifically, there are three harms the new law does not address: extensive surveillance, issues of accuracy and bias, and poor performance as an investigative tool.
Surveillance
Facial recognition gives police the power to identify and track people over time without their knowledge. This threat of surveillance has chilling effects on people’s ability to exercise their First Amendment-protected rights to free speech and assembly. It may also threaten their right against unreasonable search and seizure, as it allows police to track people’s movements over time without a search warrant.
Virginia’s new law is insufficient because it gives police overly broad authority to use facial recognition: on a wide range of people (not just suspects but also potential witnesses and victims), and in a wide range of circumstances (whenever they have “reasonable suspicion” to believe an individual has committed a crime). The result is that many people will continue to be tracked by police with no court oversight.
Accuracy and bias
While some of the top-performing facial recognition algorithms have reduced issues with accuracy and bias, facial recognition’s reliability remains questionable because of the errors that arise from human implementation.
This law also does not ensure accurate, unbiased algorithms will be used. It sets a minimum accuracy threshold for “true positives” but doesn’t take into account “false positives” — falsely identifying someone — which, in the case of policing, could lead to misidentifications and wrongful arrests. Nor does it specify that this threshold must be met by “one-to-many” matching algorithms, the kind used in law enforcement investigations. Additionally, the law requires that an algorithm show “minimal performance variations across demographics” but provides no further definition, rendering the provision meaningless and effectively allowing police to use a biased algorithm.
Nor does the law address the potential for cognitive biases to influence investigating officers during the search process. It contains insubstantial language about “training” officers, but because facial recognition is so unregulated, little to no accepted training exists in the first place, further calling into question facial recognition’s efficacy as an investigative technique.
Poor investigative tool
There are no comprehensive studies evaluating how reliable facial recognition is in forensic investigations. Given humans’ cognitive biases and tendency to defer to recommendations made by a machine, it is likely that, even as an investigative technique, facial recognition suffers from unacknowledged shortcomings. On top of this, officers manipulate the photos they run through facial recognition — using photo-editing software to change lighting and pixelation, to add facial features from other faces, or even to substitute celebrity lookalikes for photos of actual suspects. Police say this questionable use should be allowed because facial recognition results only provide a lead for investigators. But there are no recognized standards — from expert bodies, case law or statutes — about what additional evidence is needed beyond the results of a facial recognition search to make an arrest. As a result, the technology has been used as the sole or primary basis for multiple arrests, some of which were eventually ruled to be misidentifications.
This law does not provide any guidance about the corroborating evidence needed beyond a facial recognition match. It sets no standards for the quality of the photos that are used, which has a significant effect on accuracy. And it sets no guidelines for photo pre-processing, meaning police remain free to heavily manipulate the photos they run.
In short, Virginia’s new law does not adequately address the risks facial recognition poses, including the proliferation of mass surveillance, discriminatory impacts, and unreliability as an investigative tool.