The Guardian (USA)

TechScape: ‘Are you kidding, carjacking?’ – The problem with facial recognition in policing

- Johana Bhuiyan

Porcha Woodruff was eight months pregnant when police in Detroit, Michigan, came to arrest her on charges of carjacking and robbery. She was getting her two children ready for school when six police officers knocked on her door and presented her with an arrest warrant. She thought it was a prank.

“Are you kidding, carjacking? Do you see that I am eight months pregnant?” the lawsuit Woodruff filed against Detroit police reads. She sent her children upstairs to tell her fiance that “Mommy’s going to jail”.

She was detained and questioned for 11 hours and released on a $100,000 bond. She immediately went to the hospital, where she was treated for dehydration.

Woodruff later found out that she was the latest victim of false identification by facial recognition. After her image was incorrectly matched to video footage of a woman at the gas station where the carjacking took place, her picture was shown to the victim in a photo lineup. According to the lawsuit, the victim chose Woodruff’s picture as the woman associated with the perpetrator of the robbery. Nowhere in the investigator’s report did it say the woman in the video footage was pregnant.

A month later the charges were dismissed due to insufficient evidence.

Woodruff’s is the third known case of an arrest made due to false facial recognition by the Detroit police department – and the sixth case in the US. All six people who were falsely arrested are Black. For years, privacy experts and advocates have raised the alarm about the inability of the technology to properly identify people of colour and have warned of the privacy violations and dangers of a system that purports to identify anyone by their image or face. Still, law enforcement and government agencies across the US and around the world continue to contract with various facial recognition firms, from Amazon’s Rekognition to Clearview AI.

Countries including France, Germany, China and Italy have used similar technology. In December, it was revealed that Chinese police had used mobile data and faces to track protesters. Earlier this year, French legislators passed a bill giving police the power to use AI in public spaces ahead of the Paris 2024 Olympics, making France the first country in the EU to approve the use of AI surveillance (though the bill forbade the use of real-time facial recognition). And last year, Wired reported on controversial proposals to let police forces in the EU share photo databases that include images of people’s faces – described by one civil rights policy adviser as “the most extensive biometric surveillance infrastructure that I think we will ever have seen in the world”.

Back in Detroit, Woodruff’s lawsuit has sparked renewed calls in the US for total bans on police and law enforcement use of facial recognition. In the days since the lawsuit was filed, the Detroit police have rolled out new limitations on the technology, including prohibiting the use of facial recognition images in a lineup and requiring that a detective not involved in the case show the images to the witness being asked to make an identification. But activists say that’s not enough.

“The only policy that will prevent false facial recognition arrests is a complete ban,” said Albert Fox Cahn of the nonprofit Surveillance Technology Oversight Project. “Sadly, for every facial recognition mistake we know about, there are probably dozens of Americans who remain wrongly accused and never get justice. These racist, error-prone systems simply have no place in a just society.”

As governments around the world grapple with generative AI, the long-recorded harms of existing AI use, such as those in surveillance systems, are often glossed over or left out of the conversation entirely. Even in the case of the EU AI Act, which was introduced with several clauses proposing limitations on high-risk uses of AI like facial recognition, some experts say the hype around generative AI has partly distracted from those discussions. “We were quite lucky that we put a lot of these things on the agenda before this AI hype and generative AI, ChatGPT boom happened,” Sarah Chander, a senior policy adviser at the international advocacy organisation European Digital Rights, told me in June. “I think ChatGPT muddies the water very much in terms of the types of harms we’re actually talking about here.”

Much like other forms of AI-based systems, facial recognition is only as good as the data fed into it, and as such it often reflects and perpetuates the biases of those who build it – a problem, as Amnesty International has noted, because the images used to train such systems are predominantly of white faces. Facial recognition systems have the poorest accuracy rates when it comes to identifying people who are Black, female and between the ages of 18 and 30, while false positives “exist broadly”, according to a study by the National Institute of Standards and Technology (NIST). In 2017, NIST examined 140 face recognition algorithms and found that “false positive rates are highest in west and east African and east Asian people, and lowest in eastern European individuals. This effect is generally large, with a factor of 100 more false positives between countries.”

But even if facial recognition technology were perfectly accurate, it wouldn’t be safer, critics argue. Civil liberties groups say the technology can create a vast and boundless surveillance network that breaks down any semblance of privacy in public spaces. People can be identified wherever they go, even when they are engaged in constitutionally protected activity such as attending protests or religious centres. In the aftermath of the US supreme court’s reversal of federal abortion protections, that is newly dangerous for those seeking reproductive care. Some facial recognition systems, like Clearview AI, also use images scraped from the internet without consent, meaning social media images, professional headshots and any other photos that live in public digital spaces can be used to train systems that are in turn used to criminalise people. Clearview has been banned in several European countries, including Italy and Germany, and is barred from selling facial recognition data to private companies in the US.

As for Woodruff, she is seeking financial damages. Detroit police chief James E White said the department was reviewing the lawsuit and that it was “very concerning”.

“I don’t feel like anyone should have to go through something like this, being falsely accused,” Woodruff told the Washington Post. “We all look like someone.”

The week in AI

Meet the artists reclaiming AI from big tech – with the help of cats, bees and drag queens.

AI hysteria is a distraction, argues data scientist Odanga Madung, because algorithms already sow disinformation in Africa.

In the US, a tsunami of AI misinformation will shape next year’s knife-edge elections, writes John Naughton.

Data engineer and tech strategist Afua Bruce says AI can be a force for good or ill in society, so everyone must shape it – not just the “tech guys”

Perhaps it’s a bit early to use an AI meal planner. One supermarket app suggested recipes for … chlorine gas and poison-bread sandwiches.

Nice recommends the use of AI in NHS radiotherapy treatment in England…

While scientists hail a breakthrough in tracking British wildlife.

Meanwhile, in publishing news, Amazon removed AI-generated books that were misattributed to authors, and Google believes AI systems should be able to mine publishers’ work unless companies opt out.

The wider TechScape

Lifestyle and shopping apps are the latest weapons in Beijing’s information battle against Taiwan, as China uses apps to woo Taiwan’s teenagers.

Alas, we won’t be witness to a cage fight between Elon Musk and Mark Zuckerberg as the Meta chief has withdrawn his interest, claiming it is “time to move on”. Shame.

RIP Twitter. Fancy a piece of the rebranded platform? In September, Musk is auctioning Twitter memorabilia and furniture – from a painting of Bradley Cooper to a bird logo sign still affixed to X’s San Francisco offices.

Joe Biden has restricted US investments in the Chinese tech sector, and the UK is considering the same.

Amazon has joined Zoom and ordered employees back to the office.

ICYMI: Adrian Hon writes in Pushing Buttons about his visit to the Galactic Starcruiser, finding that the closure of Disney’s Star Wars hotel isn’t the end of immersive gaming.

‘The only policy that will prevent false facial recognition arrests is a complete ban.’ Photograph: John Lund/Getty Images/Blend Images

Porcha Woodruff, who was falsely arrested, in early August. Photograph: Carlos Osorio/AP
