San Francisco Chronicle

U.S. military racing against China on AI

By Tara Copp

WASHINGTON — Two Air Force fighter jets recently squared off in a dogfight in California. One was flown by a pilot. The other wasn’t.

That second jet was piloted by artificial intelligence, with the Air Force’s highest-ranking civilian riding along in the front seat. It was the ultimate display of how far the Air Force has come in developing a technology with its roots in the 1950s. But it’s only a hint of the technology yet to come.

The United States is competing to stay ahead of China on AI and its use in weapon systems. The focus on AI has generated public concern that future wars will be fought by machines that select and strike targets without direct human intervention. Officials say this will never happen, at least not on the U.S. side. But there are questions about what a potential adversary would allow, and the military sees no alternative but to get U.S. capabilities fielded fast.

“Whether you want to call it a race or not, it certainly is,” said Adm. Christopher Grady, vice chairman of the Joint Chiefs of Staff. “Both of us have recognized that this will be a very critical element of the future battlefield. China’s working on it as hard as we are.”

A look at the history of military development of AI, what technologies are on the horizon and how they will be kept under control:

AI’s ‘big bang’

AI’s roots in the military are actually a hybrid of machine learning and autonomy. Machine learning occurs when a computer analyzes data and rule sets to reach conclusions. Autonomy occurs when those conclusions are applied to take action without further human input.

This took an early form in the 1960s and 1970s with the development of the Navy’s Aegis missile defense system. Aegis was trained through a series of human-programmed if/then rule sets to be able to detect and intercept incoming missiles autonomously, and more rapidly than a human could. But the Aegis system was not designed to learn from its decisions, and its reactions were limited to the rule set it had.

“If a system uses ‘if/then’ it is probably not machine learning, which is a field of AI that involves creating systems that learn from data,” said Air Force Lt. Col. Christopher Berardi, who is assigned to the Massachusetts Institute of Technology to assist with the Air Force’s AI development.
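Berardi’s distinction can be sketched in a few lines of Python. This is a toy illustration only, not Aegis or Air Force code; the threshold, speeds and function names are all invented:

```python
# A hand-written "if/then" rule: the threshold comes from a human.
def rule_based_intercept(speed_mps):
    if speed_mps > 300.0:  # human-chosen rule
        return True
    return False

# Machine learning, in miniature: the threshold is derived from labeled data.
def learn_threshold(samples):
    """samples: list of (speed, is_threat). Returns a learned cutoff speed,
    the midpoint between the fastest non-threat and the slowest threat seen."""
    threat_speeds = [speed for speed, is_threat in samples if is_threat]
    safe_speeds = [speed for speed, is_threat in samples if not is_threat]
    return (min(threat_speeds) + max(safe_speeds)) / 2.0

data = [(120.0, False), (250.0, False), (340.0, True), (500.0, True)]
learned_cutoff = learn_threshold(data)  # 295.0, chosen by the data
print(rule_based_intercept(350.0))      # True
print(learned_cutoff)                   # 295.0
```

The first function can never react to anything outside the rules a person wrote for it; the second adjusts its cutoff whenever it is given new examples.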

AI took a major step forward in 2012 when the combination of big data and advanced computing power enabled computers to begin analyzing the information and writing the rule sets themselves. It is what AI experts have called AI’s “big bang.”

A computer writing its own rules from data is artificial intelligence. Systems can be programmed to act autonomously on the conclusions reached from those machine-written rules, which is a form of AI-enabled autonomy.

Alternative to GPS

Air Force Secretary Frank Kendall got a taste of that advanced warfighting this month when he flew on Vista, the first F-16 fighter jet to be controlled by AI, in a dogfighting exercise over California’s Edwards Air Force Base.

While that jet is the most visible sign of the AI work underway, there are hundreds of ongoing AI projects across the Pentagon.

At MIT, service members worked to clean up thousands of hours of recorded pilot conversations to create a data set from the flood of messages exchanged between crews and air operations centers during flights. The goal was to have the AI learn the difference between critical messages, such as a runway being closed, and mundane cockpit chatter, so that controllers see the critical ones faster.
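The classifier’s job can be caricatured with a tiny word-counting model trained on labeled examples. Everything here, the example messages included, is invented for illustration; the real system learns from the actual cleaned-up recordings:

```python
from collections import Counter

def train(examples):
    """examples: list of (message, is_critical). Returns per-class word counts."""
    counts = {True: Counter(), False: Counter()}
    for message, is_critical in examples:
        counts[is_critical].update(message.lower().split())
    return counts

def classify(counts, message):
    """Score the message against each class; critical wins on a higher score."""
    words = message.lower().split()
    critical_score = sum(counts[True][w] for w in words)
    routine_score = sum(counts[False][w] for w in words)
    return critical_score > routine_score

examples = [
    ("runway closed divert now", True),
    ("emergency fuel state declare", True),
    ("nice weather over the ridge", False),
    ("coffee is cold again", False),
]
model = train(examples)
print(classify(model, "runway closed ahead"))  # True
print(classify(model, "weather looks nice"))   # False
```

A production system would use far more data and a far richer model, but the shape is the same: the labels in the training set, not a programmer’s if/then rules, determine which messages get elevated.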

In another significant project, the military is working on an AI alternative to GPS satellite-dependent navigation.

In a future war, high-value GPS satellites would likely be hit or interfered with. The loss of GPS could blind U.S. communication, navigation and banking systems and make the U.S. military’s fleet of aircraft and warships less able to coordinate a response.

So last year the Air Force flew an AI program — loaded onto a laptop that was strapped to the floor of a C-17 military cargo plane — to work on an alternative solution using the Earth’s magnetic fields.

It has long been known that aircraft could navigate by following those magnetic fields, but doing so hasn’t been practical because each aircraft generates so much of its own electromagnetic noise that there has been no good way to filter for just the Earth’s emissions.

“Magnetometers are very sensitive,” said Col. Garry Floyd, director for the Department of the Air Force-MIT Artificial Intelligence Accelerator program. “If you turn on the strobe lights on a C-17, we would see it.”

The AI learned through the flights and reams of data which signals to ignore and which to follow, and the results “were very, very impressive,” Floyd said. “We’re talking tactical airdrop quality.”

“We think we may have added an arrow to the quiver in the things we can do, should we end up operating in a GPS-denied environment. Which we will,” Floyd said.

The AI so far has been tested only on the C-17. Other aircraft will also be tested, and if it works it could give the military another way to operate if GPS goes down.
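The filtering task Floyd describes, separating the Earth’s field from the aircraft’s own noise, can be caricatured as a tiny calibration fit. This is a hypothetical sketch under simplified assumptions (one noise source, a linear effect, a known field during calibration); all numbers and names are invented:

```python
def fit_noise_coeff(currents, readings, earth_field):
    """Least-squares fit of: reading = earth_field + k * current.
    currents: strobe-light current samples; readings: total measured field."""
    residuals = [reading - earth_field for reading in readings]
    numerator = sum(c * r for c, r in zip(currents, residuals))
    denominator = sum(c * c for c in currents)
    return numerator / denominator

# Calibration pass: measure the field at several strobe settings while the
# true Earth field is known. Here the aircraft adds 2.0 units per unit current.
currents = [0.0, 1.0, 2.0, 3.0]
readings = [50.0, 52.0, 54.0, 56.0]
k = fit_noise_coeff(currents, readings, earth_field=50.0)  # 2.0

# In flight, subtract the learned aircraft contribution to recover the Earth.
cleaned = 57.0 - k * 3.5  # 50.0
print(k, cleaned)
```

The real problem is vastly harder, with many interacting noise sources varying over time, which is why it took a learning system rather than a hand-fit formula, but the underlying idea of learning which part of the signal to ignore is the same.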

Safety rails, pilot speak

Vista, the AI-controlled F-16, has considerable safety rails as the Air Force trains it. There are mechanical limits that keep the still-learning AI from executing maneuvers that would put the plane in danger. There is a safety pilot, too, who can take over control from the AI with the push of a button.

The algorithm cannot learn during a flight, so each time up it has only the data and rule sets it has created from previous flights. When a new flight is over, the algorithm is transferred back onto a simulator where it is fed new data gathered in flight to learn from, create new rule sets and improve its performance.
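That fly-frozen, train-offline cycle can be sketched in a few lines. The class and method names here are invented for illustration and are not the Air Force’s code:

```python
class FrozenPolicy:
    """A policy whose rules are fixed for the duration of a flight."""
    def __init__(self, rules):
        self.rules = dict(rules)

    def act(self, state):
        # No in-flight learning: unknown situations fall back to a default.
        return self.rules.get(state, "hold")

def retrain(rules, flight_log):
    """Back in the simulator: fold the new flight's data into the rule set."""
    updated = dict(rules)
    for state, best_action in flight_log:
        updated[state] = best_action
    return updated

rules = {"bandit_left": "break_right"}
flight_log = [("bandit_high", "climb")]  # recorded during the flight

policy = FrozenPolicy(rules)
print(policy.act("bandit_high"))   # "hold" — the frozen rules don't cover it

rules = retrain(rules, flight_log)  # learning happens only between flights
print(FrozenPolicy(rules).act("bandit_high"))  # "climb"
```

Keeping the learning step on the ground is itself a safety rail: nothing the algorithm does in the air can change its behavior mid-flight.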

But the AI is learning fast. Between the supercomputing speed it uses to analyze data and the chance to fly its new rule sets in the simulator, the AI has already found efficient ways to fly and maneuver that have let it beat some human pilots in dogfighting exercises.

But safety is still a critical concern, and officials said the most important way to take safety into account is to control what data is reinserted into the simulator for the AI to learn from. In the jet’s case, it’s making sure the data reflects safe flying. Ultimately, the Air Force hopes that a version of the AI being developed can serve as the brain for a fleet of 1,000 unmanned warplanes under development by General Atomics and Anduril.

In the experiment training AI on how pilots communicate, the service members assigned to MIT cleaned up the recordings to remove classified information and the pilots’ sometimes salty language.

Learning how pilots communicat­e is “a reflection of command and control, of how pilots think. The machines need to understand that too if they’re going to get really, really good,” said Grady, the Joint Chiefs vice chairman. “They don’t need to learn how to cuss.”

Photo: The XQ-67A unmanned aerial vehicle, a prototype of an AI drone fleet being developed under the Air Force Research Laboratory, sits at General Atomics’ test facility in Palmdale. (Damian Dovarganes/Associated Press)
