Newsweek

War by Other Means

Artificial Intelligence Technology Is on the Verge of Transforming the Nature of War and Conflict

- BY DAVID H. FREEDMAN


GET SMART The U.S. military is spending more than $1 billion to integrate artificial intelligence into its weapons. Right: the Army’s autonomous vehicle, Origin, prepares for a practice run at the Yuma Proving Ground in Arizona.

On August 29, three days after a suicide bomber killed 13 American soldiers and 160 civilians at Kabul airport, U.S. military intelligence was tracking what was thought to be another potentially devastating attack: a car driving toward the airport carrying “packages” that looked suspiciously like explosives. The plan was to lock in on the car by video with one of the military’s Reaper drones and destroy it with a Hellfire missile at a moment when there were no innocent civilians nearby. Sure enough, the car came to a stop at a quiet spot.

The tactical commander, most likely working at Creech Air Force Base in Nevada, had received the green light from General Kenneth F. McKenzie Jr., the head of U.S. Central Command in Tampa, Florida. Since video feeds have to ricochet among military commanders spread out around the world, they are often delayed by several seconds. In this case, that lag may have been time enough for a handful of civilians to approach the target vehicle, according to the U.S. military. The blast killed as many as ten Afghan civilians, including seven children, and raised an international outcry. Doubts have surfaced over whether the car even posed a threat in the first place.

As military strategists ponder how to prevent future threats from ISIS, al Qaeda and other groups that could arise in Taliban-controlled Afghanistan—or any other distant location, for that matter—they are searching for a better way of attacking from afar. That search is leading in a disturbing direction: letting the machines decide when, and perhaps whom, to kill.

In coming years, Reapers and other U.S. drones will be equipped with advanced artificial intelligence technology. That raises a startling scenario: military drones squirreled away in tiny, unmanned bases in or near Afghanistan, ready to take off, scan the territory, instantly analyze the images they take in, identify and target terrorist activity, ensure the target is clear of civilians, fire a missile, confirm the kill and return to base—all with little or no human intervention.

The motivation to equip Reaper drones with artificial intelligence (AI) is not primarily humanitarian, of course. The true purpose of AI weaponry is to achieve overwhelming military advantage—and in this respect, AI is highly promising. At a time when the U.S. has pulled its troops from Afghanistan and is reluctant to commit them to other conflicts around the world, the ability to attack from a distance with unmanned weapons is becoming a key element of U.S. military strategy. Artificial intelligence, by endowing machines with the ability to make battlefield decisions on their own, makes this strategy viable.

Integrating AI technology into weapons systems opens the door to making them smaller and cheaper than manned versions and capable of reacting faster and hitting targets more accurately, without risking the lives of soldiers. Plans are being laid to include AI not only in autonomous Reapers but in a whole arsenal of weaponry, ranging from fighter jets to submarines to missiles, which will be able to strike at terrorists and enemy forces entirely under their own control—humans optional.

Nations aren’t in the habit of showcasing their most advanced technology. But judging from what’s come to light in various reports, AI-equipped weapons are coming online fast. Progress (if you can call it that) toward ever more capable autonomous military machines has accelerated in recent years, thanks both to huge strides in the field of AI and to enormous investments by Russia, China, the U.S. and other countries eager to get an AI-powered edge in military might—or at least not to fall too far behind their rivals.

Russia has robotic tanks and missiles that can pick their own targets. China has unmanned mobile rocket launchers, submarines and other AI weapons under development. Turkey, Israel and Iran are pursuing AI weapons. The U.S., meanwhile, has already deployed autonomous sub-hunting ships and tank-seeking missiles—and much more is in the works. The Pentagon is currently spending more than $1 billion a year on AI—and that counts only spending in publicly released budgets. About 10 percent of the Pentagon’s budget is cloaked in secrecy, and hundreds of billions more are buried in the budgets of other agencies.

Scientists, policy analysts and human rights advocates have raised concerns about the coming AI arsenals. Some say such weapons are vulnerable to errors and hackers that could threaten innocent people. Others worry that letting machines initiate deadly attacks on their own is unethical and poses an unacceptable moral risk. Still others fear that the rise of AI weapons gives rogue nations and terrorist organizations the ability to punch above their weight, shaking up the global balance of power, leading to more confrontations (potentially involving nuclear weapons) and wars.

These objections have done nothing to slow the AI arms race. U.S. military leaders seem less concerned with such drawbacks than with keeping up with China and Russia. “AI in warfare is already happening,” says Robert Work, a former U.S. deputy secretary of defense and co-chair of the National Security Commission on AI. “All the major global military competitors are exploring what more can be done with it, including the U.S.”

Regardless of who wins the race, the contours of military force—who has it and how they use it—are about to change radically.

The Leap to AI

Missile-equipped drones have been a mainstay of U.S. anti-terrorist and other military combat for two decades, but they cause considerable collateral damage—between 900 and 2,200 civilians have been killed in U.S. drone strikes over the past twenty years, 300 or more of them children, according to the London-based Bureau of Investigative Journalism. They’re also prone to delays in video transmission that almost certainly have led to missed opportunities, when a brief window closed before a team could give the remote pilot a green light.

An AI-equipped drone, by contrast, could spot, validate and fire at a target in a few hundredths of a second, greatly expanding the military’s ability to strike from afar. That capability could enable more U.S. strikes anywhere in the world, such as the January 2020 assassination of Iranian general Qasem Soleimani during a visit to Iraq. It could also give the U.S. a more effective means of executing surgical but deadly responses to affronts such as the Syrian government’s past chemical-weapon attacks on its own people—without sending a single U.S. soldier into the country.

Boosting the accuracy and timing of drone strikes could also reduce the U.S. military’s heavy reliance on conventional aircraft strikes. According to the British independent monitoring group Airwars, those U.S. strikes have cost the lives of as many as 50,000 civilians since 2001 and put human pilots, and their $100-million aircraft, at risk.

Weapons that seek out targets without human control are not entirely new. In World War II, the U.S. fielded torpedoes that could listen for German U-boats and pursue them. Since then, militaries have deployed hundreds of different types of guns, guided missiles and drones capable of aiming themselves and locking in on targets.

What’s different about AI-driven weapons systems is the nature and power of the weapon’s decision-making software. Until recently, any computer programs that were baked into a weapon’s control system had to be written by human programmers, providing step-by-step directions for accomplishing simple, narrow tasks in specific situations. Today, AI software enlists “machine learning” algorithms that actually write their own code after being exposed to thousands of examples of what a successfully completed task looks like, be it recognizing an enemy tank or keeping a self-driving vehicle away from trees.
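To make the distinction concrete, here is a minimal, hypothetical sketch of the learn-from-examples approach described above, written in Python with synthetic data and the open-source scikit-learn library. It illustrates the general technique only; nothing about it reflects any actual weapons software, and every name and number in it is invented.

```python
# Illustrative sketch of "learning from examples" -- NOT any military system.
# In practice the examples would be thousands of labeled images (e.g., "tank"
# vs. "not tank"); here they are synthetic feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row is a vector of image features; label 1 = "target", 0 = "not target".
features = rng.normal(size=(2000, 32))
labels = (features[:, :4].sum(axis=1) > 0).astype(int)  # a hidden rule the model must discover

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# Training is where the algorithm effectively writes its own decision rules:
# no human spells out step-by-step instructions for telling the classes apart.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is that the decision rules emerge from the training step rather than from a programmer’s explicit instructions.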

The resulting code looks nothing like conventional computer programming, but its capabilities go far beyond it. Conventional autonomous weapons have to either be placed near or pointed at isolated or easily recognizable enemy targets, lest they lock in on the wrong object. But AI weapons can in principle simply be turned loose to watch out for or hunt down almost any type of target, deciding on their own which to attack and when. They can track targets that might vary in appearance or behavior, switch targets and navigate unfamiliar terrain in bad weather, while recognizing and avoiding friendly troops or civilians.

The potential military advantages are enormous. “AI can reduce the cost and risk of any mission, and get up close to the adversary while keeping warfighters out of harm’s way,” says Tim Barton, chief technology officer at the Dynetics Group, which is developing unmanned air systems for the U.S. Department of Defense. “And they can take in and go through information at light speed. Humans can’t do it fast enough anymore.”

Robotic killing isn’t some futuristic possibility: it is already here. That Rubicon was crossed last year in Libya, when an AI-equipped combat drone operating entirely outside of human control killed militia rebels fighting government soldiers. Essentially a real-life version of the “hunter-killer” drones depicted in the Terminator 3 film, the Turkish-made Kargu-2 “lethal autonomous weapons system” flew over the battlefield, recognized the fleeing rebels, dove at them and set off an explosive charge, according to a March United Nations report that became public in May.

Russia, China, Iran and Turkey have all demonstrated AI weapons. So far, the deadly Libyan attack is the only publicly known instance of such weapons being used on the battlefield. Still, there’s plenty of evidence such attacks will become more frequent.

Russia’s military has been open about working furiously to take advantage of AI capabilities. According to Russia’s state press agency Tass, the country is developing an AI-guided missile that can pick its target in mid-flight; a self-targeting machine gun; autonomous vehicles for land, air, sea and underwater surveillance and combat; a robotic tank bristling with guns, missiles and flamethrowers; and AI-based radar stations, among other projects. Russia’s Military Industrial Committee, the country’s top military decision-making body, has declared its intention to turn nearly a third of its firepower over to AI by 2030.

China has been more circumspect about details, probably to minimize concern over its many business dealings with AI companies in Silicon Valley and elsewhere in the U.S. and around the world. But few observers doubt that the country’s vast investment in AI science and technology will spill over into weapons. Some of China’s top military and defense-industry leaders have publicly said as much, predicting that lethal “intelligentized” weapons will be common by 2025, and will soon help close the gap between China’s military and those of the U.S., Europe and Russia.

Iran has demonstrated fully autonomous suicide drones, and its generals have promised to have them and possibly other AI weapons under development, including missiles and robots, ready for deployment by 2024. Iran has already unleashed drone strikes on Saudi Arabia, Israel and U.S. forces in Iraq, and crippled an Israeli-managed oil tanker, killing two crew members, in a drone attack off the coast of Oman in late July. There’s no evidence any of these strikes were aided by AI, but few experts doubt Iran will enlist AI in future attacks as soon as it is able, likely before the technology is polished or safeguards are built in.

U.S. allies are also jumping into the fray. The U.K. has deployed small self-targeting missiles, tested an autonomous-vehicle-mounted machine gun and demonstrated AI-controlled missile-defense systems on its ships. Israel, meanwhile, continues to beef up its heavily employed and highly effective “Iron Dome” air-defense missile system with more and more AI-aided capabilities. So capable is the technology that the U.S. Army has installed Israeli Iron Dome batteries for border defense in New Mexico and Texas.

The U.S. hasn’t stood still, of course. In 2018 the Pentagon formed the Joint Artificial Intelligence Center to spur and coordinate AI development and integration throughout the military. One big reason is Russia and China’s ongoing development of “hypersonic” cruise missiles that can travel at more than five times the speed of sound. At those speeds, humans may not be able to react quickly enough to initiate defensive measures or launch counterstrike missiles in a “use it or lose it” situation. Speaking at a 2019 conference of defense experts, U.S. Missile Defense Agency Director Vice Admiral Jon Hill put it this way: “With the kind of speeds that we’re dealing with today, that kind of reaction time that we have to have today, there’s no other answer other than to leverage artificial intelligence.”

The Pentagon has several programs under way. One involves guided, jet-powered cannon shells that can be fired in the general direction of the enemy to seek out targets while avoiding allies. The Navy, meanwhile, has taken delivery of two autonomous ships for a variety of missions, including finding enemy submarines, and is developing unmanned submarines. And in December the Air Force demonstrated turning the navigation and radar systems of a U-2 spy plane over to AI control.

On August 3, even as the Taliban was beginning to seize control of Afghanistan on the heels of departing American forces, Colonel Mike Jiru, a Materiel Command program executive officer for the Air Force, told Air Force Magazine that the military is planning a number of upgrades to the Reaper, the U.S.’s workhorse military drone. The upgrades include the ability to take off and land autonomously, and the addition of powerful computers specifically intended to run artificial intelligence software.

“We’re on a pathway where leaders don’t fundamentally question whether we should militarize AI,” says Ingvild Bode, an associate professor with the Centre for War Studies at the University of Southern Denmark.

Here Come the Drone Swarms

Small autonomous drones are likely to have the most immediate impact. That’s because they’re relatively cheap and easy to produce in big numbers, don’t require a lot of support or infrastructure, and aren’t likely to wreak massive havoc if something goes wrong. Most important, thanks to AI they’re capable of providing a massive advantage in almost any type of conflict or engagement, including reprisals against terrorists, asymmetric warfare or all-out conflict between nations.

A single small autonomous drone can fly off to scout out terrorist or other enemy positions and beam back invaluable images and other data, often without being spotted. Like the Kargu-2, it can drop an explosive payload on enemy targets. Such offensive drones can serve as “loitering munitions,” simply flying around a battlefield or terrorist territory until their AI identifies an appropriate target and goes in for the kill. Larger AI-equipped autonomous drones, such as Israel’s Harpy, can find a radar station or other substantial target on their own and fire off a small missile to destroy it. Virtually every country with a large military is exploring AI-enabled drone weaponry.

The real game-changer will be arrays, or swarms, of autonomous drones that can blanket an area with enough cameras or other types of sensors to spot and analyze almost any enemy activity. Coordinating the flights of an entire network of drones is beyond the capabilities of human controllers, but perfectly doable with AI.

Keeping the swarm coordinated isn’t even the hardest part. The bigger challenge is making use of the vast stream of images and other data they send back. “The real value of AI is gathering and integrating the data coming in from large quantities of sensors, and eliminating the information that isn’t going to be of interest to military operators,” says Chris Brose, chief strategy officer for Anduril, an Irvine, California, company that makes AI- and drone-based defense technologies.
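In miniature, the kind of triage Brose describes looks something like the hypothetical Python sketch below: sensor reports that fall below an operator-set confidence threshold, or outside the area of interest, are dropped before a human ever sees them. Every field name, threshold and coordinate here is invented for illustration and does not describe Anduril’s or anyone else’s actual software.

```python
# Toy illustration of sensor-feed triage: keep only detections that clear an
# operator-set confidence threshold and fall inside the area of interest.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the model thinks it saw
    confidence: float  # model's confidence, 0.0 - 1.0
    lat: float
    lon: float

def triage(detections, min_confidence=0.8, bbox=(34.0, 35.0, 69.0, 70.0)):
    """Return only the detections worth a human operator's attention."""
    lat_min, lat_max, lon_min, lon_max = bbox
    return [
        d for d in detections
        if d.confidence >= min_confidence
        and lat_min <= d.lat <= lat_max
        and lon_min <= d.lon <= lon_max
    ]

feed = [
    Detection("vehicle", 0.95, 34.5, 69.2),
    Detection("vehicle", 0.40, 34.6, 69.3),  # too uncertain -- dropped
    Detection("person",  0.90, 51.5, 0.1),   # outside the area of interest -- dropped
]
print(triage(feed))  # only the first detection survives
```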

The Pentagon’s Project Maven, a four-year-old program, aims to use AI to spot and track enemy activity from video feeds. Google was contributing its own extensive AI development resources to the project until employees pressured the company in 2018 to withdraw over concerns about militarizing AI. (The Department of Defense has denied that Project Maven is aimed at weapons applications, but that claim is widely discounted.)

Beyond merely spotting enemy activity, the next step is to apply AI to “battlefield management”—that is, to cut through the fog of war and help military commanders understand everything going on in a combat situation, and then decide what to do about it. That might include moving troops, selecting targets and bringing in air support and reinforcements based on up-to-the-second information streaming in from drone swarms, satellites and a range of sensors in and around the combat zone. “There are so many things vying for the attention of the soldier in warfare,” says Mike Heibel, program director for Northrop Grumman’s air defense team, which is working on battlefield-management AI for the U.S. military. “AI has the capabilities to pick out threats and send 3D information to a cannon.” Northrop Grumman has already demonstrated a mobile system that does exactly that.

U.S. work on AI-enhanced battlefield management is advancing on several fronts. The U.S. National Geospatial-Intelligence Agency claims it has already turned AI loose on 12 million satellite images in order to spot an enemy missile launch. The Army has experimentally fielded an AI-based system called Prometheus that extracts enemy activity from real-time imaging, determines on its own which of the activities meet commanders’ criteria for high-priority targets and feeds those positions to artillery weapons to automatically aim them.

The Black Box Problem

The more the military embraces AI, the louder the chorus of objections from experts and advocates. One big concern is that AI-guided weapons will mistakenly target civilians or friendly forces, or cause more unnecessary casualties than human operators would.

Such concerns are well founded. AI systems can in theory be hacked by outsiders, just as any software can. The safeguards may be more robust than those of commercial systems, but the stakes are much higher when the result of a cyber breach is a powerful weapon gone wild. In 2011 several U.S. drones deployed in the Middle East were infected with malicious viruses—a warning that software-reliant weapons are vulnerable.

Even if the military can keep its AI systems safe from hackers, it may still not be able to ensure that AI software always behaves as intended. That’s due to what’s known as the “black box” problem: Because machine-learning algorithms write their own complex, hard-to-analyze code, human software experts can’t always predict what an AI system will do in unexpected situations. Testing reduces but doesn’t eliminate the chances of ugly surprises—it can’t cover the essentially infinite number of unique conditions that an AI-controlled weapon might confront in the chaos of conflict.
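The worry is easy to demonstrate in miniature. The hypothetical Python sketch below, using synthetic data and the open-source scikit-learn library, trains a simple classifier on two tidy clusters and then asks it about an input unlike anything it saw in training. The model still returns a near-certain answer, which is exactly the kind of confident wrongness that concerns critics; it is an illustration of the general phenomenon, not of any military system.

```python
# Toy demonstration of the "black box" worry: a model trained on narrow data
# still produces a confident-looking answer for inputs unlike anything it saw
# in training. Synthetic data only; purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Training data: two tight, well-separated clusters ("class 0" and "class 1").
class0 = rng.normal(loc=(-2, 0), scale=0.3, size=(500, 2))
class1 = rng.normal(loc=(+2, 0), scale=0.3, size=(500, 2))
X = np.vstack([class0, class1])
y = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X, y)

# An input far outside anything seen in training -- the model has no real basis
# for a judgment, yet it still reports near-total confidence in one class.
novel_input = np.array([[40.0, 40.0]])
print(model.predict_proba(novel_input))
```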

Self-driving cars, which are controlled by AI programs roughly similar to those employed in military applications, provide a useful analog. In 2018, a driverless Uber hit and killed a pedestrian in Tempe, Arizona. The pedestrian had been walking a bike across the road outside a crosswalk—a scenario that had simply never come up in testing. “AI can get it wrong in ways that are entirely alien to humans,” says the University of Southern Denmark’s Bode. “We can’t test the ability of a system to differentiate between civilians and combatants in all situations.”

It gets worse. An enemy can take advantage of known weaknesses in AI systems by altering the appearance of uniforms, buildings and weapons, or by changing its behavior in ways that trip up the algorithms. Driverless cars have been purposely fooled into errors by stickers placed on traffic signs, phony road markings and lights shined onto their sensors. “Can you make an airliner full of passengers look like an enemy target and cause an AI weapons system to behave badly?” Dynetics’ Barton asks. In combat, he adds, the stakes for getting it right are far higher. “We have to bake in that protection from the beginning, not bolt it on later.”

Even if military AI systems work exactly as intended, is it ethical to give machines the authority to destroy and kill? Work, the former deputy defense secretary, insists the U.S. military is strictly committed to keeping a human decision-maker in the “kill chain” so that no weapon will pick a target and fire on its own without an OK. But other nations may not be as careful, he says. “As far as we know, the U.S. military is the only one that has established ethical principles for AI.”

Twenty-two nations have asked the United Nations to ban automated weapons capable of operating outside human oversight, but so far no agreements have been signed. Human Rights Watch and other advocacy groups have called for similar bans to no avail. If Russia, China and others give AI weapons the authority to choose targets, the U.S. may face a choice: go along or operate at a military disadvantage.

That sets up a race to the bottom in which the least ethical or most careless adversary—the one most aggressive about fielding AI-enabled weaponry, regardless of reliability and safeguards—forces others to follow suit. Nuclear weapons could be placed under the control of flawed AI systems that watch for signs that someone else’s AI nukes are about to launch. AI is “increasing the risk of inadvertent or accidental escalation caused by misperception or miscalculation,” says James Johnson, a foreign-policy researcher at Ireland’s Dublin City University and author of Artificial Intelligence and the Future of Warfare (Manchester University Press, September 2021).

Both the U.S. and Russia have repeatedly refused to allow the United Nations’ Convention on Certain Conventional Weapons (CCW), the main international body for weapons agreements, to ban lethal AI-controlled weapons. Meetings to discuss revisiting the CCW are planned for December, but there’s little optimism an agreement will be reached; among the most powerful nations, only China has expressed support for such a treaty. NATO nations have discussed the possibility of an agreement, but nothing definite has emerged. If the U.S. is negotiating AI weapons separately with other countries, there’s little public word of it.

Even if diplomatic efforts led to limits on the use of AI, verifying adherence would be far more difficult than, say, inspecting nuclear missile silos. Military leaders in a hostile, competitive world are not known for their ability to resist advanced weaponry, regardless of consequences.


BLURRED VISION Drone operators, looking through video feeds that may have been delayed, fired a missile at what was thought to be an attack in progress, killing civilians. Clockwise from top: wreckage from the August 29 U.S. drone attack in Kabul, Afghanistan; General Kenneth F. McKenzie Jr.; mourners for a victim of the attack.

SELF SUFFICIENT The U.S. military plans to include AI in autonomous Reapers and other weapons, including submarines and fighter jets. Top left to bottom right: an avionics specialist conducts preflight checks; an Air Force officer pilots a Reaper; ready on the tarmac at Creech Air Force Base in Nevada.

FLIGHT SCHOOL The software driving AI weapons enlists “machine learning” algorithms that actually write their own code. Right: a test pilot from Carnegie Mellon University tries out an AI-equipped drone at the Yuma Proving Ground in Arizona last year.

FIGHTING MACHINES Military establishments around the world are working on machines that don’t need people: 1. Members of the Turkish Navy train with a drone; 2. The U.S.’s Expeditionary Modular Autonomous Vehicle (EMAV); 3. Russia’s Eleron-3 unmanned drone; 4. Israel’s Heron 1 drone; 5. A U.S. Marine practices maneuvering an EMAV; 6. A U.S. Sea Hunter unmanned ship; 7. Iran’s Revolutionary Guard conducts a drill with missiles and drones earlier this year; 8. Marines practice a medical evacuation with an EMAV.

STRIKING OUT By enhancing the U.S. military’s ability to strike from afar, AI drones could mean more frequent strikes at far-flung targets, such as Iranian general Qasem Soleimani, who was assassinated in January 2020 while visiting Iraq. Right: demonstrations in Tehran following Soleimani’s death. Below: the Pentagon in Washington, D.C.
