The Boston Globe

Curbs on new-tech drones debated

Some nations urge binding AI rules

- By Eric Lipton

UNITED NATIONS — It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off.

But it is approaching reality as the United States, China, and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life-and-death decisions over to autonomous drones equipped with artificial intelligence programs.

That prospect is so worrying to many other governments that they are trying to impose legally binding rules through the United Nations on the use of what militaries call lethal autonomous weapons.

“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue, and an ethical issue.”

But while the UN is providing a platform for governments to express their concerns, the process seems unlikely to yield substantive legally binding restrictions. The United States, Russia, Australia, Israel, and others have all argued that no new international law is needed for now, while China wants to define any legal limit so narrowly that it would have little practical effect, arms control advocates say.

The result has been to tie the debate up in a procedural knot with little chance of progress on a legally binding mandate anytime soon. “We do not see that it is really the right time,” Konstantin Vorontsov, the deputy head of the Russian delegation to the UN, told diplomats who were packed into a basement conference room recently at the UN headquarters in New York.

The debate over the risks of AI has drawn attention in recent days with the battle over control of OpenAI, perhaps the world’s leading AI company, whose leaders appeared split over whether the firm is taking sufficient account of the dangers of the technology.

The question of what limits should be placed on the use of lethal autonomous weapons has taken on new urgency, and for now has come down to whether it is enough for the UN simply to adopt nonbinding guidelines, the position supported by the United States.

“The word ‘must’ will be very difficult for our delegation to accept,” Joshua Dorosin, the chief international agreements officer at the State Department, told other negotiators during a debate in May over the language of proposed restrictions.

Dorosin and members of the US delegation, which includes a representative from the Pentagon, have argued that instead of a new international law, the UN should clarify that existing international human rights laws already prohibit nations from using weapons that target civilians or cause a disproportionate amount of harm to them.

Arms control groups such as the International Committee of the Red Cross and Stop Killer Robots, along with national delegations including Austria, Argentina, New Zealand, Switzerland, and Costa Rica, have proposed a variety of limits.

Some would seek a global ban on lethal autonomous weapons that explicitly target humans. Others would require that these weapons remain under “meaningful human control” and be used only in limited areas for specific amounts of time.
