The Guardian (USA)

How Israel uses facial recognition systems in Gaza and beyond

- Nick Robins-Early

Governments around the world have increasingly turned to facial recognition systems in recent years to target suspected criminals and crack down on dissent. The recent boom in artificial intelligence has accelerated the technology’s capabilities and proliferation, much to the concern of human rights groups and privacy advocates who see it as a tool with immense potential for harm.

Few countries have experimented with the technology as extensively as Israel, which the New York Times recently reported has developed new facial recognition systems and expanded its surveillance of Palestinians since the start of the Gaza war. Israeli authorities deploy the system at checkpoints in Gaza, scanning the faces of Palestinians passing through and detaining anyone with suspected ties to Hamas. The technology has also falsely tagged civilians as militants, one Israeli officer told the Times.

The country’s use of facial recognition is one of the new ways that artificial intelligence is being deployed in conflict, with rights groups warning this marks an escalation in Israel’s already pervasive targeting of Palestinians via technology.

In an Amnesty International report on Israel’s use of facial recognition last year, the rights group documented security forces’ extensive gathering of Palestinian biometric data without their consent. Israeli authorities have used facial recognition to build a huge database of Palestinians that is then used to restrict freedom of movement and carry out mass surveillance, according to the report. The Israeli ministry of defense did not return a request for comment on the findings of Amnesty’s report or the New York Times article on its facial recognition programs.

The Guardian spoke with Matt Mahmoudi, an adviser on AI and human rights at Amnesty and lead researcher on the report, about how Israel deploys facial recognition systems and how their use has expanded during the war in Gaza.

One thing that stands out from your report is that there’s not just one system of facial recognition but several different apps and tools. What are the ways that Israeli authorities collect facial data?

There’s a slew of facial recognition tools that the state of Israel has experimented with in the occupied Palestinian territories for the better part of the last decade. We’re looking at tools by the names of Red Wolf, Blue Wolf and Wolf Pack. These are systems that have been tested in the West Bank, and in particular in Hebron. All of these systems are effectively facial recognition tools.

In Hebron, for a long time, there was a reliance on something known as the Wolf Pack system – a database of information pertaining to just Palestinians. They would effectively hold a detained person in front of the CCTV camera, and then the operations room would pull information from the Wolf Pack system. That’s an old system that requires this linkup between the operations room and the soldier on the ground, and it’s since been upgraded.

One of the first upgrades that we saw was the Blue Wolf system, which was first reported on by Elizabeth Dwoskin in the Washington Post back in 2021. That system is effectively an attempt at collecting as many faces of Palestinians as possible, in a way that’s akin to a game. The idea is that the system would eventually learn the faces of Palestinians, and soldiers would only have to whip out the Blue Wolf app, hold it in front of someone’s face, and it would pull all the information that existed on them.

You mentioned that there’s a gamification of that system. How do incentives work for soldiers to collect as much biometric data as possible?

There’s a leaderboard on the Blue Wolf app, which effectively tracks the military units that are using the tool and capturing the faces of Palestinians. It gives a weekly score based on the number of pictures taken. Military units that captured the most faces of Palestinians on a weekly basis would be given rewards such as paid time away.

So you’re constantly put into the terrain of no longer treating Palestinians as individual human beings with human dignity. You’re operating by a gamified logic, in which you will do everything in your power to map as many Palestinian faces as possible.

And you said there are other systems as well?

The latest we’ve seen happen in Hebron has been the additional introduction of the Red Wolf system, which is deployed at checkpoints and interfaces with the other systems. The way that it works is that individuals passing through checkpoints are held within the turnstile, cameras scan their faces and a picture is put up on the screen. The soldier operating it will be given a light indicator – green, yellow, red. If it’s red, the Palestinian individual is not able to cross.

The system is based only on images of Palestinians, and I can’t stress enough that these checkpoints are intended for Palestinian residents only. That they have to use these checkpoints in the first place in order to be able to access very basic rights, and are now subject to these arbitrary restrictions by way of an algorithm, is deeply problematic.

What’s been particularly chilling about the system has been hearing the stories about individuals who haven’t been able to even come back into their own communities as a result of not being recognized by the algorithm. Also hearing soldiers speaking about the fact that now they were doubting whether they should let a person that they know very well pass through a checkpoint, because the computer was telling them not to. They were finding that increasingly they had a tendency of thinking of Palestinians as numbers that had either green, yellow or red lights associated with them on a computer screen.

These facial recognition systems operate in a very opaque way and it’s often hard to know why they are making certain judgments. I was wondering how that affects the way Israeli authorities who are using them make decisions.

All the research that we have on human-computer interaction to date suggests that people are more likely to defer agency to an algorithmic indicator in especially pressing circumstances. What we’ve seen in testimonies that we reviewed has been that soldiers time and time again defer to the system rather than to their own judgment. That simply comes down to a fear of being wrong about particular individuals. What if their judgment is not correct, irrespective of the fact that they might know the person in front of them? What if the computer knows more than they do?

Even if you know that these systems are incredibly inaccurate, the fact that your livelihood might depend on following a strict algorithmic prescription means that you’re more likely to follow it. It has tremendously problematic outcomes and means that there is a void in terms of accountability.

What has the Israeli government or military said publicly about the use of these technologies?

The only public acknowledgment that the Israeli ministry of defense has made – as far as the Red Wolf, Blue Wolf and Wolf Pack systems – is simply saying that they of course have to take measures to guarantee security. They say some of these measures include innovative tech solutions, but they’re not at liberty to discuss the particularities. That’s kind of the messaging that comes again and again whenever they’re hit with a report that specifically takes to task their systems for human rights violations.

What’s also interesting is the way in which tech companies, together with various parts of the ministry of defense, have boasted about their technological prowess when it comes to AI systems. I don’t think it’s a secret that Israeli authorities rely on a heavy dose of PR that touts their military capabilities in AI as being quite sophisticated, while also not being super detailed about exactly how these systems function.

There have been past statements from the Israeli government – I’m thinking specifically of a statement on the use of autonomous weapons – that argue these are sophisticated tools that could actually be good for human rights, that they could remove the potential for harm or accidents. What is your take on that argument?

If anything, we have seen time and time again how semi-autonomous weapons systems end up dehumanizing people and leading to deaths that are later on described as “regrettable”. They don’t have meaningful control and they effectively turn people into numbers crunched by an algorithm, as opposed to a moral, ethical and legal judgment that has been made by an individual. It has this sort of consequence of saying, “Well look, identifying what counts as a military target is not up to us. It’s up to the algorithm.”

Since your report came out, the New York Times reported that similar facial recognition tech has been developed and deployed to surveil Palestinians in Gaza. What have you been seeing in terms of the expansion of this technology?

We know that facial recognition systems were being used for visitors coming in and out of Gaza, but following the New York Times reporting it’s the first time that we’ve heard it being used in the way that it has been – particularly against Palestinians in Gaza who are fleeing from the north to the south. What I’ve been able to observe is largely based on open-source intelligence, but what we see is footage that shows what look like cameras mounted on tripods that are situated outside of makeshift checkpoints. People move slowly from north to south through these checkpoints, and then people are pulled out of the crowd and detained on the basis of what we suspect is a facial recognition tool.

The issue there is that people are already being moved around chaotically from A to B under the auspices of being brought to further safety, only to be slowed down and asked to effectively be biometrically checked before they’re allowed to do so. We also hear about individuals that have been detained and beaten and questioned after having been biometrically identified, only for it to later be established that this was a mistake.

The Israeli government’s justification for its use of checkpoints and detentions is that it’s based on national security fears. When it comes to facial recognition, how does that argument hold up from a human rights perspective?

Your rights aren’t less viable or active just because there’s a national security threat. There is, however, a three-part test that you deploy when it comes to figuring out in what particularly unique kinds of circumstances a state would violate certain rights in order to uphold others. That three-part test tries to assess the necessity, proportionality and legitimacy of a particular intervention.

At Amnesty, under international human rights law, we don’t believe that there is necessity, proportionality or legitimacy. This facial recognition technology isn’t compatible with the right to privacy, the right to non-discrimination, the right to peaceful assembly or freedom of movement. All these rights are severely compromised under a system that is, by design, effectively a system of mass surveillance and therefore unlawful.

You’ve talked about the lack of accountability and accuracy with these tools. If the Israeli government knows these are not accurate, then why continue to use them?

I think AI-washing is a significant part of how governments posture that they’re doing something about a problem that they want to seem like they’re being proactive on. It’s been clear since the 2010s that governments around the globe have relied on tech solutions. Now, since the explosion of generative AI in particular, there’s this idea that AI is going to solve some of the most complex social, economic and political issues.

I think all it does is absolve states of the responsibilities that they have to their citizens – the obligations that they have under international law to uphold the rights of those whom they subject to their power by basically saying “the system will take care of it” or “the system was at fault”. It creates these neat grounds for states to be able to seem like they’re doing something, without being held to account on whatever they’re actually doing. There’s a technical system that is mediating both accountability and responsibility.

Palestinians search for usable items among the rubble in Deir al-Balah, Gaza, on Thursday. Photograph: Anadolu/Getty Images