The Atlanta Journal-Constitution

Artificial intelligence in weapons spurs fears

Experts urge United Nations: Find a way to protect us.

- By Tracey Lien, Los Angeles Times

NEW YORK — Tesla and SpaceX chief Elon Musk has joined dozens of CEOs of artificial intelligence companies in signing an open letter urging the United Nations to ban the use of AI in weapons before the technology gets out of hand.

The letter was published Monday — the same day the U.N.’s Group of Governmental Experts on Lethal Autonomous Weapons Systems was due to meet to discuss ways to protect civilians from the misuse of automated weapons. That meeting, however, has been postponed until November.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” read the letter, which was also signed by the chief executives of companies such as Cafe X Technologies (which built an autonomous barista) and Plus One Robotics (whose robots automate manual labor). “Once this Pandora’s box is opened, it will be hard to close. Therefore we implore the High Contracting Parties to find a way to protect us all from these dangers.”

The letter’s sentiments echo those in another open letter that Musk — along with more than 3,000 AI and robotics researchers, plus others such as Stephen Hawking and Steve Wozniak — signed nearly two years ago. In the 2015 letter, the signatories warned of the dangers of artificial intelligence in weapons, which could be used in “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”

Many nations are already familiar with drone warfare, in which human-piloted drones are deployed in lieu of putting soldiers on site. Lower costs, as well as the fact that they don’t risk the lives of military personnel, have contributed to their rising popularity. Automated weapons would take things a step further, removing human intervention entirely and potentially improving efficiency. But they could also open a whole new can of worms, according to the 2015 letter, “lowering the threshold for going to battle” and creating a global arms race in which lethal technology can be mass-produced, deployed, hacked and misused.

For example, the letter says, there could be armed quadcopters that search for and eliminate people who meet pre-defined criteria.

“Artificial intelligence technology has reached a point where the deployment of such systems is — practically, if not legally — feasible within years, not decades, and the stakes are high,” the 2015 letter read. “It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”

Musk has long been wary of the proliferation of artificial intelligence, warning of its potential dangers as far back as 2014, when he drew a comparison between the future of AI and the film “The Terminator.” Musk is also a sponsor of OpenAI, a nonprofit that he co-founded with entrepreneurs such as Peter Thiel and Reid Hoffman to research and build “safe” artificial intelligence whose benefits are “as widely and evenly distributed as possible.”

Earlier this year, Musk unveiled details about his new venture Neuralink, a California company that plans to develop a device that can be implanted into the brain and help people who have certain brain injuries, such as strokes. The device would enable a person’s brain to connect wirelessly with the cloud, as well as with computers and with other brains that have the implant.

The end goal of the device, Musk said, is to fight potentially dangerous applications of artificial intelligence.

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI,” Musk said in a story on the website Wait But Why.

Musk’s views of the risks of artificial intelligence have clashed with those of Facebook’s Mark Zuckerberg. Last month, Zuckerberg called Musk’s warnings overblown and described himself as “optimistic.”

Musk shot back by saying Zuckerberg’s understand­ing of the subject was “limited.”

THE NEW YORK TIMES: (From left) Paul Christiano, Dario Amodei and Geoffrey Irving write equations on a whiteboard at OpenAI, the artificial intelligence lab founded by Elon Musk, in San Francisco. Some researchers are working on ways to lower the risks of artificial...

Tesla Motors CEO and SpaceX CEO and CTO Elon Musk is among experts concerned about the use of artificial intelligence in weapons.
