Sunday Times (Sri Lanka)

When AI decides who lives and dies

The Israeli military’s algorithmic targeting has created dangerous new precedents

(Simon Frankel Pratt is a lecturer in political science at the School of Social and Political Sciences, University of Melbourne) (Excerpts from an article that appeared in foreignpolicy.com)

Investigative journalism published in April by Israeli media outlet Local Call (and its English version, +972 Magazine) shows that the Israeli military has established a mass assassination programme of unprecedented size, blending algorithmic targeting with a high tolerance for bystander deaths and injuries.

The investigation reveals a huge expansion of Israel’s previous targeted killing practices, and it goes a long way toward explaining how and why the Israel Defence Forces (IDF) could kill so many Palestinians while still claiming to adhere to international humanitarian law. It also represents a dangerous new horizon in human-machine interaction in conflict—a trend that’s not limited to Israel.

Israel has a long history of using targeted killings. During the violent years of the Second Intifada (2000-2005), the practice became institutionalised within the military, but operations were relatively infrequent and often involved the use of special munitions or strikes that targeted only people in vehicles to limit damage to bystanders.

But since the Hamas attack on Oct. 7, 2023, the IDF has shifted gears. It has discarded the old process of carefully selecting mid-to-high-ranking militant commanders as targets. Instead, it has built on ongoing advancements in artificial intelligence (AI) tools, including tools for locating targets. The new system automatically sifts through huge amounts of raw data to identify probable targets and hand their names to human analysts to do with what they will—and in most cases, it seems, those human analysts recommend an airstrike.

The new process, according to the investigation by Local Call and +972 Magazine, works like this: An AI-driven system called Lavender has tracked the names of nearly every person in Gaza, and it combines a wide range of intelligence inputs—from video feeds and intercepted chat messages to social media data and simple social network analysis—to assess the probability that an individual is a combatant for Hamas or another Palestinian militant group. It was up to the IDF to determine the rate of error that it was willing to tolerate in accepting targets flagged by Lavender, and for much of the war, that threshold has apparently been 10 percent.

Targets that met or exceeded that threshold would be passed on to operations teams after a human analyst spent an estimated 20 seconds reviewing them. Often this involved only checking whether a given name was that of a man (on the assumption that women are not combatants). Strikes on the 10 percent of false positives—comprising, for example, people with names similar to those of Hamas members, or people sharing phones with family members identified as Hamas members—were deemed an acceptable error under wartime conditions.
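The mechanics described here amount to a score-and-threshold filter. As a rough illustration only, the minimal Python sketch below shows how a calibrated probability score and a fixed error tolerance might gate a candidate list ahead of any human review; every name, field, and number in it is hypothetical and assumed for the example, not drawn from the reporting.

```python
# A minimal sketch, assuming a calibrated classifier score per person.
# Everything here (names, fields, the Candidate type) is hypothetical and
# for illustration only; it is not drawn from the reporting.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float  # estimated probability of militant affiliation, 0.0-1.0

def flag_for_review(candidates: list[Candidate],
                    threshold: float = 0.9) -> list[Candidate]:
    """Return the candidates whose score meets the acceptance threshold.

    If the scores are well calibrated, accepting only scores of at least
    0.9 means tolerating roughly a 10 percent false-positive rate among
    the accepted names, the tolerance figure cited in the investigation.
    """
    return [c for c in candidates if c.score >= threshold]

# Hypothetical example: two of three candidates clear the threshold and
# would be queued for a brief human check.
pool = [Candidate("A", 0.95), Candidate("B", 0.91), Candidate("C", 0.40)]
print([c.name for c in flag_for_review(pool)])  # -> ['A', 'B']
```

The sketch also makes the reported failure mode plain: once a score clears the threshold, nothing in the pipeline itself forces a substantive human check.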

A second system, called Where’s Dad, determines whether targets are at their homes. Local Call reported that the IDF prefers to strike targets at their homes because it is much easier to find them there than while they are engaging the IDF in battle. The families and neighbours of those possible Hamas members are viewed as insignificant collateral damage, and many of these strikes have so far been directed at what one of the Israeli intelligence officers interviewed called “unimportant people”—junior Hamas members who are seen as legitimate targets because they are combatants but are not of great strategic significance. This appears to have especially been the case during the early crescendo of bombardment at the outset of the war, after which the focus shifted towards somewhat more senior targets “so as not to waste bombs”.
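The reporting describes Where’s Dad only at the level of its function: flagging when a tracked individual appears to be at a registered home address. As a loose illustration of that general mechanic, and nothing more, the sketch below implements a generic geofence check; the function names, coordinates, and 100-metre radius are assumptions made for the example.

```python
# A minimal sketch of a generic geofence check, assuming simple GPS-style
# latitude/longitude fixes. The function names, the 100 m radius, and the
# coordinates below are assumptions made for illustration only.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))  # mean Earth radius ~6,371 km

def is_at_home(current_fix: tuple[float, float],
               home: tuple[float, float],
               radius_m: float = 100.0) -> bool:
    """True if the latest location fix falls within the geofence around home."""
    return haversine_m(*current_fix, *home) <= radius_m

# Hypothetical example with made-up coordinates about 79 m apart.
print(is_at_home((0.0005, 0.0005), (0.0, 0.0)))  # -> True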

One lesson from this revelation addresses the question of whether Israel’s tactics in Gaza are genocidal. Genocidal acts can include efforts to bring about mass death through deliberately induced famine or the wholesale destruction of the infrastructure necessary to support future community life, and some observers have claimed that both are evident in Gaza. But the clearest example of genocidal conduct is opening fire on civilians with the intention of wiping them out en masse. Despite evident incitement to genocide by Israeli officials not linked to the IDF’s chain of command, the way that the IDF has selected and struck targets has remained opaque.

Local Call and +972 Magazine have shown that the IDF may be criminally negligent in its willingness to strike targets when the risk of bystanders dying is very high, but because the targets selected by Lavender are ostensibly combatants, the IDF’s airstrikes are not intended to exterminate a civilian population. They have followed the so-called operational logic of targeted killing even if their execution has resembled saturation bombing in its effects.

This matters to experts in international law and military ethics because of the doctrine of double effect, which permits foreseeable but unintended harms if the intended act does not depend on those harms occurring, such as in the case of an airstrike against a legitimate target that would happen whether or not there were bystanders. But in the case of the Israel-Hamas war, most lawyers and ethicists—and apparently some number of IDF officers—see these strikes as failing to meet any reasonable standard of proportionality while stretching the notion of discrimination beyond reasonable interpretations. In other words, they may still be war crimes.

Scholars and practitioners have discussed “human-machine teaming” as a way to conceptualise the growing centrality of interaction between AI-powered systems and their operators during military actions. Rather than autonomous “killer robots,” human-machine teaming envisions the next generation of combatants as systems that distribute agency between human and machine decision-makers. What emerges is not The Terminator, but a constellation of tools brought together by algorithms and placed in the hands of people who still exercise judgment over their use.

Algorithmic targeting is in widespread use in the Chinese province of Xinjiang, where the Chinese government employs something similar as a means of identifying suspected dissidents among the Uyghur population. In both Xinjiang and the occupied Palestinian territories, the algorithms that incriminate individuals depend on a wealth of data inputs that are unavailable outside of zones saturated with sensors and subject to massive collection efforts.

Israel’s use of Lavender, Where’s Dad, and other previously exposed algorithmic targeting systems—such as the Gospel—shows how human-machine teaming can become a recipe for strategic and moral disaster. Local Call and +972 published testimonies from a range of intelligence officers suggesting growing discomfort, at all levels of the IDF’s chain of command, with the readiness of commanders to strike targets with no apparent regard for bystanders.

Israel’s policies violate emerging norms of responsible AI use. They combine an emotional atmosphere of emergency and fury within the IDF, a deterioration in operational discipline, and a readiness to outsource regulatory compliance to a machine in the name of efficiency. Together, these factors show how an algorithmic system can become an “unaccountability machine,” allowing the IDF to transform military norms not through any specific set of decisions, but by systematically attributing new, unrestrained actions to a seemingly objective computer.

How did this happen? Israel’s political leadership assigned the IDF an impossible goal: the total destruction of Hamas. At the outset of the war, Hamas had an estimated 30,000 to 40,000 fighters. After almost two decades of control in the Gaza Strip, Hamas was everywhere. On Oct. 7, Hamas fighters posed a terrible threat to any IDF ground force entering Gaza unless their numbers could be depleted and their battalions scattered or forced underground.

The fact that Lavender could generate a nearly endless list of targets—and that other supporting systems could link them to buildings that could be struck rapidly from the air and recommend appropriate munitions—gave the IDF an apparent means of clearing the way for an eventual ground operation. Nearly half of reported Palestinian fatalities occurred during the initial six weeks of heavy bombing. Human-machine teaming, in this case, produced a replicable tactical solution to a strategic problem.

The IDF overcame the main obstacle to this so-called solution—the vast number of innocent civilians densely packed into the small territory of the Gaza Strip—by simply deciding not to care all that much whom it killed alongside its targets. In strikes against senior Hamas commanders, according to the Local Call and +972 investigation, those interviewed said the IDF decided it was permissible to kill as many as “hundreds” of bystanders for each commander killed; for junior Hamas fighters, that accepted number began at 15 bystanders but shifted slightly down and up during various phases of fighting.

Moreover, as targets were frequently struck in homes where unknown numbers of people were sheltering, entire families were wiped out. These family annihilations likely grew as additional relatives or unrelated people joined the original residents to shelter temporarily, and it does not seem that the IDF’s intelligence personnel typically attempted to discover this and update their operational decisions accordingly.

The appeal of human-machine teams and algorithmic systems is often claimed to be efficiency—but these systems cannot be scaled up indefinitely without generating counternormative and counterproductive outcomes. Lavender was not intended to be the only arbiter of target legitimacy, and the targets that it recommends could be subject to exhaustive review, should its operators desire it. But under enormous pressure, IDF intelligence analysts reportedly devoted almost no resources to double-checking targets, nor to double-checking bystander locations after feeding the names of targets into Where’s Dad.

Such systems are purpose-built, and officials should remember that even under emergency circumstances, they should proceed with caution when expanding the frequency or scope of a computer tool’s use. The hoped-for operational benefits are not guaranteed, and as the catastrophe in Gaza shows, the strategic—and moral—costs could be significant.

Palestinian children ride bicycles on Friday in a Gaza street devastated by Israeli bombardment. AFP
