Nations hesitant to set up rules on AI weapons
Smaller nations want guardrails on use as technology improves rapidly
It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off.
But it is approaching reality as the United States, China and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life and death decisions over to autonomous drones equipped with artificial intelligence programs.
That prospect is so worrying to many other governments that they are trying to focus attention on it with proposals at the United Nations to impose legally binding rules on the use of what militaries call lethal autonomous weapons.
“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”
But while the U.N. is providing a platform for governments to express their concerns, the process seems unlikely to yield substantive new legally binding restrictions. The United States, Russia, Australia, Israel and others have all argued that no new international law is needed for now, while China wants to define any legal limit so narrowly that it would have little practical effect, arms control advocates say.
The result has been to tie the debate up in a procedural knot with little chance of progress on a legally binding mandate anytime soon.
“We do not see that it is really the right time,” Konstantin Vorontsov, the deputy head of the Russian delegation to the U.N., told diplomats who were packed into a basement conference room recently at the U.N. headquarters in New York.
Last week, officials from China and the United States discussed a related issue: potential limits on the use of AI in decisions about deploying nuclear weapons.
Against that backdrop, the question of what limits should be placed on the use of lethal autonomous weapons has taken on new urgency, and for now has come down to whether it is enough for the U.N. simply to adopt nonbinding guidelines, the position supported by the United States.
“The word ‘must’ will be very difficult for our delegation to accept,” Joshua Dorosin, the chief international agreements officer at the State Department, told other negotiators during a debate in May over the language of proposed restrictions.
Dorosin and members of the U.S. delegation, which includes a representative from the Pentagon, have argued that, instead of a new international law, the U.N. should clarify that existing international human rights laws already prohibit nations from using weapons that target civilians or cause a disproportionate amount of harm to them.
But the position being taken by the major powers has only increased the anxiety among smaller nations, who say they are worried that lethal autonomous weapons might become common on the battlefield before there is any agreement on rules for their use.
“Complacency does not seem to be an option anymore,” Ambassador Khalil Hashmi of Pakistan said during a meeting at U.N. headquarters. “The window of opportunity to act is rapidly diminishing as we prepare for a technological breakout.”
Rapid advances in AI and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that will soon allow them to find and select targets more on their own.
“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.