Los Angeles Times

Taking aim at AI in weapons

‘Once this Pandora’s box is opened, it will be hard to close,’ Elon Musk and others say.

By Tracey Lien

Elon Musk and others urge the United Nations to ban artificial intelligence in armaments.

NEW YORK — Tesla and SpaceX Chief Executive Elon Musk has joined dozens of CEOs of artificial intelligence companies in signing an open letter urging the United Nations to ban the use of AI in weapons before the technology gets out of hand.

The letter was published Monday — the same day the U.N.’s Group of Governmental Experts on Lethal Autonomous Weapons Systems was to discuss ways to protect civilians from the misuse of automated weapons. That meeting, however, has been postponed until November.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” read the letter, which was also signed by the chief executives of companies such as Cafe X Technologies (which built the autonomous barista) and PlusOne Robotics (whose robots automate manual labor). “Once this Pandora’s box is opened, it will be hard to close. Therefore we implore the High Contracting Parties to find a way to protect us all from these dangers.”

The letter’s sentiments echo those in another open letter that Musk — along with more than 3,000 AI and robotics researchers, plus others such as physicist Stephen Hawking and Apple co-founder Steve Wozniak — signed nearly two years ago. In the 2015 letter, the signatories warned of the dangers of artificial intelligence in weapons, which could be used in “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”

Many nations are already familiar with drone warfare, in which human-piloted drones are deployed in lieu of putting soldiers on site. Lower costs, as well as the fact that they don’t risk the lives of military personnel, have contributed to their rising popularity. Future capabilities for unmanned aerial vehicles could include autonomous takeoffs and landings, while underwater drones could eventually roam the seas for weeks or months to collect data to send back to human crews on land or on ships.

Automated weapons would take things a step further, removing human intervention entirely, and potentially improving efficiency. But they could also open a whole new can of worms, according to the 2015 letter, “lowering the threshold for going to battle” and creating a global arms race in which lethal technology can be mass-produced, deployed, hacked and misused.

For example, the letter says, there could be armed quadcopters that search for and eliminate people who meet pre-defined criteria.

“Artificial intelligence technology has reached a point where the deployment of such systems is — practically, if not legally — feasible within years, not decades, and the stakes are high,” the 2015 letter read. “It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”

Philip Finnegan, director of corporate analysis at Teal Group, said there has been “no appetite” in the U.S. military for removing the human decision maker from the equation and allowing robots to target foes autonomously.

“The U.S. military has stressed it’s not interested,” he said.

Musk has long been wary of the proliferation of artificial intelligence, warning of its potential dangers as far back as 2014, when he drew a comparison between the future of AI and the film “The Terminator.” Musk is also a sponsor of OpenAI, a nonprofit he co-founded with entrepreneurs such as Peter Thiel and Reid Hoffman to research and build “safe” artificial intelligence, whose benefits are “as widely and evenly distributed as possible.”

This year, Musk unveiled details about his new venture Neuralink, a California company that plans to develop a device that can be implanted into the brain and help people who have certain brain injuries, such as strokes. The device would enable a person’s brain to connect wirelessly with the cloud, as well as with computers and with other brains that have the implant.

The end goal of the device, Musk said, is to fight potentially dangerous applications of AI.

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI,” Musk said in a story on the website Wait But Why.

Musk’s views of the risks of artificial intelligence have clashed with those of Facebook’s Mark Zuckerberg as well as others researching AI. Last month, Zuckerberg called Musk’s warnings overblown and described himself as “optimistic.”

Musk shot back by saying Zuckerberg’s understanding of the subject was “limited.”

tracey.lien@latimes.com Twitter: @traceylien
Times staff writer Samantha Masunaga contributed to this report.

Brendan Smialowski AFP/Getty Images: “LETHAL autonomous weapons threaten to become the third revolution in warfare,” says a letter to the U.N. from Elon Musk and the CEOs of AI companies.
