The Commercial Appeal

Pentagon seeks ethical principles in AI handling

- Matt O’Brien ASSOCIATED PRESS

The Pentagon is adopting new ethical principles as it prepares to accelerate its use of artificial intelligence technology on the battlefield.

The new principles call for people to “exercise appropriate levels of judgment and care” when deploying and using AI systems, such as those that scan aerial imagery to look for targets.

They also say decisions made by automated systems should be “traceable” and “governable,” which means “there has to be a way to disengage or deactivate” them if they are demonstrating unintended behavior, said Air Force Lt. Gen. Jack Shanahan, director of the Pentagon’s Joint Artificial Intelligence Center.

The Pentagon’s push to speed up its AI capabilities fueled a fight between tech companies over a $10 billion cloud computing contract known as the Joint Enterprise Defense Infrastructure, or JEDI. Microsoft won the contract in October but hasn’t been able to get started on the 10-year project because Amazon sued the Pentagon, arguing that President Donald Trump’s antipathy toward Amazon and its CEO Jeff Bezos hurt the company’s chances at winning the bid.

An existing 2012 military directive requires humans to be in control of automated weapons but doesn’t address broader uses of AI. The new U.S. principles are meant to guide both combat and non-combat applications, from intelligence-gathering and surveillance operations to predicting maintenance problems in planes or ships. The approach outlined Monday follows recommendations made last year by the Defense Innovation Board, a group led by former Google CEO Eric Schmidt.

While the Pentagon acknowledged that AI “raises new ethical ambiguities and risks,” the new principles fall short of stronger restrictions favored by arms control advocates.

“I worry that the principles are a bit of an ethics-washing project,” said Lucy Suchman, an anthropologist who studies the role of AI in warfare. “The word ‘appropriate’ is open to a lot of interpretations.”

Shanahan said the principles are intentionally broad to avoid handcuffing the U.S. military with specific restrictions that could become outdated. “Tech adapts. Tech evolves,” he said. The Pentagon hit a roadblock in its AI efforts in 2018 after internal protests at Google led the tech company to drop out of the military’s Project Maven, which uses algorithms to interpret aerial images from conflict zones. Other companies have since filled the vacuum. Shanahan said the new principles are helping to regain support from the tech industry, where “there was a thirst for having this discussion.”

“Sometimes I think the angst is a little hyped, but we do have people who have serious concerns about working with the Department of Defense,” he said.

Shanahan said the guidance also helps secure American technological advantage as China and Russia pursue military AI with little attention paid to ethical concerns.

University of Richmond law professor Rebecca Crootof said adopting principles is a good step, but the military will need to show it can critically evaluate the huge data troves used by AI systems, as well as cybersecurity risks.

Photo: SUSAN WALSH/AP. The Pentagon, headed by Defense Secretary Mark Esper, is accelerating its use of AI technology on the battlefield.
