Los Angeles Times

Killer robots: Less sci-fi by the day

The latest developments in autonomous drone warfare raise frightening questions

By Roberto J. González

Roberto J. González is chair of the anthropology department at San José State University. His most recent book is “War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future.”

Drone warfare is about to get much more dangerous. Within a few years, military drones (also known as unmanned aerial vehicles) are likely to be equipped with facial recognition technology capable of identifying people by comparing real-time images with massive commercial and governmental databases.

If all goes according to plan, these aircraft will also be fully autonomous, meaning they will conduct missions using artificial intelligence, with no human operators even on the ground.

This may sound far-fetched, but it’s not. The U.S. Air Force recently awarded a contract to RealNetworks, a Seattle-based tech firm specializing in AI and digital media, to adapt its proprietary facial recognition platform for deployment on a non-piloted drone “for special ops,” which it said would “open the opportunity for real-time autonomous response by the robot.”

The Air Force awarded the firm a similar contract to incorporate facial recognition technology into an autonomous, quadrupedal robot “for security and expeditionary” use. Imagine a mechanized mastiff with a mind of its own.

As Big Tech and Big Defense join forces, science fiction is on the verge of becoming science fact.

Although the recent Air Force contracts don’t specify that the autonomous drones and robots with facial recognition technology are being built for combat missions, it’s likely this will eventually happen. Once autonomous robots incorporate the technology to identify and spy on suspected enemies, the Pentagon and U.S. intelligence agencies will undoubtedly be tempted to use the software to assassinate them.

But there are a few big problems. For one thing, facial recognition programs are notoriously inaccurate.

In 2019, a sweeping study conducted by the U.S. National Institute of Standards and Technology revealed alarming discrepancies in how facial recognition software identified certain groups of people.

It exposed deep flaws, including significantly higher rates of false positive matches for darker-skinned people compared with lighter-skinned people. The reason for these algorithmic biases has to do with bad data: Since darker-skinned people are often poorly represented in facial recognition data sets, bias creeps into the results. Garbage in, garbage out.

Further problems arise when you start to use fully autonomous drones without human operators.

Sure, they’re very appealing to the Pentagon’s top brass. As counter-drone systems such as control signal jammers become more sophisticated, military leaders are eager to have drones that are less reliant on remote control and more self-sufficient. These machines are likely to use AI-based navigation systems such as simultaneous localization and mapping (SLAM), lidar (light detection and ranging) technology, and celestial navigation.

Fully autonomous drones are also desirable from the government’s point of view because of the psychological impact of remote-controlled warfare on drone pilots, many of whom suffer from serious mental illnesses such as post-traumatic stress disorder after killing their targets.

To some observers, autonomous drones seem to offer a way of eliminating the psychological trauma of remote killing.

For nearly 20 years, researchers have observed the psychological effects of remote-controlled drone warfare, which simultaneously stretches and compresses the battlefield. It does so by increasing the geographic distance between the targeter and the targeted, even as drone operators develop a close, intimate picture of the daily lives of those they eventually kill from thousands of miles away. It’s more like long-distance hunting than warfare.

But autonomous drones raise a host of ethical concerns precisely because they might one day absolve humans of responsibility for life-and-death decisions. That’s an alluring, even seductive prospect. But the question is: Who will be held accountable when an autonomous robot outfitted with facial recognition software kills civilian noncombatants?

When autonomous hunter-killer drones are able to select and engage targets on their own, the human conscience will effectively have been taken out of the process. What restraints will be left?

No one has an answer to these ethical dilemmas. Meanwhile, the technological developments in drone warfare are unfolding in a broader context.

Ukraine has become a testing ground for a vast array of unmanned aerial vehicles, including strike drones, weaponized DIY hobbyist drones and loitering munitions that stay airborne for some time and attack only when the target has been identified. But there’s also been an acceleration in the development of advanced autonomous weapons as the U.S., China, Russia, Iran, Israel, the European Union and others compete to build new warfighting technologies.

As autonomous-weapons research and development lunges forward, the possibility of a full-blown robot war looms on the horizon. Scientists, scholars and citizens of conscience should act to ban these weapons now, before that day arrives.

Organizations such as the International Committee for Robot Arms Control, Human Rights Watch and the Future of Life Institute still have a chance of convincing the United Nations to ban autonomous weapons — if enough of us take a stand against killer robots.
