Press-Telegram (Long Beach)

SHOULD FACIAL RECOGNITION TECHNOLOGIES BE BANNED?

Facial recognition technology is a threat to civil liberties

Facial recognition technology is a useful tool for law enforcement

- By Jennifer Lynch. Jennifer Lynch is surveillance litigation director at the Electronic Frontier Foundation, an international digital civil liberties organization based in San Francisco.

- By Hoan Ton-That. Hoan Ton-That is co-founder and CEO of Clearview AI.

Facial recognition surveillance is a growing threat to constitutionally protected free speech, privacy, racial justice, and information security. We need strict federal, state and local laws to ban law enforcement's use of it.

Companies like Clearview AI provide facial recognition services to police with little to no public oversight. Clearview claims to have amassed a data set of more than 20 billion images by scraping millions of websites, including news media and sites like Facebook, YouTube, and Venmo, without ever seeking users' consent.

The company can even scan for faces in videos on social media sites, not just static photos. Using Clearview's services, police can identify people in photographs and videos and learn significant, highly personal information about them.

Police abuse of this technology is not just theoretical: it's already happening. Law enforcement has already used facial recognition on public streets and at political demonstrations to surveil protestors' First Amendment-protected activities; for example, police in Baltimore and Miami used the technology to identify participants in Black-led demonstrations against police violence.

Florida agencies used facial recognition thousands of times to try to identify unknown suspects without ever informing those suspects or their attorneys about the practice. And the Los Angeles Police Department in 2020 barred officers and detectives from using outside facial recognition platforms in their investigations after discovering a handful of detectives had used Clearview without permission.

Law enforcement agencies often argue they must have access to new technology — no matter how invasive to people's privacy — to help solve the most heinous crimes. Clearview itself has said it “exists to help law enforcement agencies solve the toughest cases.”

But police already use this technology for minor crimes. Officers in Clifton, N.J., used Clearview to identify shoplifters and a good Samaritan, and a lieutenant in Green Bay, Wisconsin, told a colleague to “feel free to run wild with your searches,” including using the technology on family and friends. Officers from coast to coast used it without departmental knowledge or oversight.

Widespread use of facial recognition technology by the government, especially to identify people secretly when they're out in public, will fundamentally change our society by chilling and deterring people from exercising their First Amendment rights to speak, assemble, and associate with others. Countless studies have shown that when people think the government is watching them, they alter their behavior to avoid scrutiny — a burden falling disproportionately upon communities of color, immigrants, religious minorities, and other marginalized groups.

The right to speak anonymously and associate with others without the government watching is fundamental to a democracy. It's not just civil liberties groups like the Electronic Frontier Foundation (EFF) that say so: America's Founding Fathers used pseudonyms in the Federalist Papers to debate what kind of government we should form. And the Supreme Court has consistently recognized that anonymous speech and association are necessary for the First Amendment right to free speech to be at all meaningful.

Clearview has expanded its reach to the battlefield, making headlines by providing its service free of charge to Ukraine. But introducing such technology into the life-or-death realm of a war zone could have significant negative consequences by providing a means for psychological warfare and leading to future abuses that could spiral out of control.

U.S. communities are pushing back. Localities across the nation as well as states including California, Washington, Massachusetts and New York have enacted bans or moratoria on at least some of the most egregious government uses of facial recognition. And congressional hearings have revealed bipartisan objection to carte blanche use of facial recognition by the police.

EFF continues to support legislation and litigation to curb use of facial recognition. Without an official moratorium or ban in place, police use of this technology is likely to expand throughout our communities.

It's time to reach out to city councilmembers, county supervisors, and state and federal legislators to demand meaningful restrictions on police use of facial recognition. We need to stop the government from using and abusing this technology to chill our cherished freedoms before it's too late.


What if law enforcement had the ability, using proven and highly accurate technology, to identify potential suspects after a crime was committed or when someone, like a child, goes missing?

If we were referring to the application of DNA evidence in crime solving, a technology widely used for decades by law enforcement, it would be a no-brainer.

By contrast, law enforcement's use of facial recognition technology (FRT) — although it serves many of the same public safety purposes in similar applications as forensic DNA evidence, including helping to exonerate the innocent as well as convict the guilty — is met with largely unfounded skepticism.

Recent high-profile examples highlight how facial recognition technology serves the public good.

When rioters stormed the United States Capitol to prevent Congress from certifying the 2020 election, facial recognition technology helped — using publicly available images — to reveal the potential identities of individuals involved in the mayhem.

Following Russia's unprovoked attack, the Ukrainian government effectively used FRT to identify deceased Russian soldiers, investigate war crimes in Bucha and help with family reunification and refugee resettlement — all from publicly available information.

Of course, most practical applications of FRT in law enforcement scenarios would be less flashy though nonetheless crucial to the administration of justice, such as helping identify child predators, resolving cold case felonies and prosecuting financial crimes.

Despite the obvious benefits, several state and local government entities are either radically restricting or banning use of facial recognition technology, largely due to misconceptions about how FRT actually works.

Cutting-edge companies like the one I founded, Clearview AI, recognize that FRT must be used for the best and highest purposes while proactively limiting any potential downsides.

We have developed the top-rated facial recognition technology in the U.S., as verified by the National Institute of Standards and Technology. Our algorithm can pick the correct person out of a lineup of 12 million photos with a staggering 99.85% accuracy rate, and works with substantially equal effectiveness regardless of race, age, gender or other demographic features.

Our image repository consists of public data that can be obtained by a typical Google search, sourced from news media, mugshot websites, public social media and other open sources.

This means that if a social media post is set to private, it won't appear in search results.

Clearview AI's facial recognition database is available only to government agencies, which may use the technology solely to assist law enforcement investigations or in connection with national security. And law enforcement's use of FRT is not “real-time surveillance,” which is defined as the live monitoring of behavior.

We believe that FRT can be deployed in a way that protects fundamental freedoms and human rights when used in an after-the-crime manner. In fact, accurate FRT can make police descriptions such as “a six-foot-one African-American male” a thing of the past, creating a world with fewer unnecessary police interactions. Additionally, it promises to help overturn wrongful convictions, prevent discrimination, exonerate the innocent and eliminate the police lineup.

We encourage government entities to adopt common-sense legislation and/or regulation, and best-use practices.

Law enforcement agencies should make their facial recognition policies public, outlining the use cases, situations and types of crime for which they will use facial recognition.

Effective training protocols must be established, and any facial recognition technology system must have an administrator to manage access and oversee the technology's use.

The system must have effective reporting tools that generate usage reports and audits.

Finally, users should never rely on the results of a facial recognition technology search as the sole means of identifying a suspect — each possible match must be confirmed by independent, corroborating information.

We're now in the 21st century's third decade. Facial recognition technology should be leveraged for good. With proper purpose, restraint and regulation, it can help solve crimes, aid victims and ultimately make the world a safer place.

ERIC RISBERG — THE ASSOCIATED PRESS: This photo, taken Tuesday, May 7, 2019, shows a security camera in the Financial District of San Francisco. San Francisco became the first U.S. city to ban the use of facial recognition by police and other city agencies as the technology creeps increasingly into daily life.
AMR ALFIKY — THE NEW YORK TIMES: Hoan Ton-That, the chief executive of Clearview AI, uses the Clearview smartphone application in New York on Jan. 10, 2020. Researchers at the University of Chicago want you to be able to post selfies without worrying that the next Clearview AI will use them to identify you.
