Curbs on new-tech drones debated
Some nations urge binding AI rules
UNITED NATIONS — It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off.
But it is approaching reality as the United States, China, and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life-and-death decisions over to autonomous drones equipped with artificial intelligence programs.
That prospect is so worrying to many other governments that they are trying to impose legally binding rules through the United Nations on the use of what militaries call lethal autonomous weapons.
“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue, and an ethical issue.”
But while the UN is providing a platform for governments to express their concerns, the process seems unlikely to yield substantive legally binding restrictions. The United States, Russia, Australia, Israel, and others have all argued that no new international law is needed for now, while China wants to define any legal limit so narrowly that it would have little practical effect, arms control advocates say.
The result has been to tie the debate up in a procedural knot with little chance of progress on a legally binding mandate anytime soon. “We do not see that it is really the right time,” Konstantin Vorontsov, the deputy head of the Russian delegation to the UN, told diplomats who were packed into a basement conference room recently at the UN headquarters in New York.
The debate over the risks of AI has drawn attention in recent days with the battle over control of OpenAI, perhaps the world’s leading AI company, whose leaders appeared split over whether the firm is taking sufficient account of the dangers of the technology.
The question of what limits should be placed on the use of lethal autonomous weapons has taken on new urgency, and for now has come down to whether it is enough for the UN simply to adopt nonbinding guidelines, the position supported by the United States.
“The word ‘must’ will be very difficult for our delegation to accept,” Joshua Dorosin, the chief international agreements officer at the State Department, told other negotiators during a debate in May over the language of proposed restrictions.
Dorosin and members of the US delegation, which includes a representative from the Pentagon, have argued that instead of a new international law, the UN should clarify that existing international human rights laws already prohibit nations from using weapons that target civilians or cause a disproportionate amount of harm to them.
Arms control groups such as the International Committee of the Red Cross and Stop Killer Robots, along with national delegations including Austria, Argentina, New Zealand, Switzerland, and Costa Rica, have proposed a variety of limits.
Some would seek to globally ban lethal autonomous weapons that explicitly target humans. Others would require that these weapons remain under “meaningful human control,” and that they must be used in limited areas for specific amounts of time.