Arkansas Democrat-Gazette

Supervising artificial intelligence

By Douglas Frantz, Los Angeles Times

Douglas Frantz was deputy secretary-general of the Organization for Economic Cooperation and Development from 2015 to 2017.

Fifty years ago this month, in the midst of the Cold War, nations began signing an international treaty to stop the spread of nuclear weapons. Today, as artificial intelligence and machine learning reshape every aspect of our lives, the world confronts a challenge of similar magnitude, and it needs a similar response.

There is a danger in pushing the parallel between nuclear weapons and AI too far. But the greater risk lies in ignoring the consequences of unleashing technologies whose goals are neither predictable nor aligned with our values.

The immediate prelude to the Treaty on Non-Proliferation of Nuclear Weapons was the Cuban missile crisis in 1962. The United States and the Soviet Union went to the brink of nuclear war before reason intervened. A few months later, with that near-catastrophe on his mind, President Kennedy warned that as many as 25 countries could have nuclear weapons by 1975, a sharp rise in the risk of Armageddon.

The non-proliferation treaty, which went into effect in 1970, rested on a central bargain: Nations without nuclear weapons promised never to acquire them, and those with them agreed to share nuclear technology for peaceful purposes and eventually to disarm.

Reasonable people can argue about the effectiveness of the treaty. In the intervening years, four more countries acquired nuclear weapons. But the facts are that Kennedy's dire prediction did not come true, and nuclear war has so far been avoided.

Artificial intelligence has not yet confronted a crisis like the showdown between the USSR and the U.S. in Cuba. By and large, AI has provided us with amazingly beneficial tools. Learning algorithms on our digital devices extract patterns from data to influence what we buy, watch and read. On a grander scale, AI helps doctors detect and treat diseases, opens new markets and improves productivity for business, and creates data sets and models that address critical issues related to education, energy, and the environment.

At the same time, AI's perils are apparent. Google won't renew a U.S. Defense Department contract that involved using artificial intelligence to improve drone targeting and enhance surveillance because 4,000 of its employees objected to the use of their work for lethal purposes.

Similar concerns led Jeff Bezos, founder of Amazon, to express fear recently about the uses of AI in lethal autonomous weapons. He proposed “a big treaty … something that would help regulate these weapons.”

The danger that autonomous weapons will alter the nature of war and eliminate human control over war-making is real. So is the risk of accidents involving driverless vehicles and the threat to liberty and critical thinking posed by the misuse of Facebook and search engine files. Other dangers are just as real but less obvious.

First, familiarity breeds complacency. We are seduced by machines that make work and play easier, so we ignore the fact that those same machines increase our vulnerability to threats to civil liberties, democracy and economic equality.

Second, the complexity of AI science may lead policymakers and the public to believe the tech industry is best positioned to decide the future of AI. Yet the issues AI raises certainly should not be left to commercial interests.

Third, the dominance of China and the United States and a few tech giants creates the real prospect of digital feudalism. Concentration of huge amounts of wealth and power in the hands of a few means enormous numbers of people—entire countries and continents—could be left behind.

The ultimate solution to these challenges is a new grand bargain on the scale of the non-proliferation treaty: Nations agree to share the beneficial uses of artificial intelligence and accept universal safeguards to protect against the misuse of these powerful technologies.

The treaty would enshrine certain basic principles. The concept of "human in command," guaranteeing that people retain control over AI, should be a priority.

Fortunately, the conversation has begun. Industry groups, think tanks and policymakers are tackling issues like economics, law, security, human rights and environmental protection. The leading industrial economies, through the G-7 and G-20, are examining AI's effect on growth and productivity. The United Nations is debating a ban on fully autonomous lethal weapons.

It took enormous effort to develop, negotiate and finalize a non-proliferation treaty that is still evolving today. Any attempt to control the power of AI will be just as fraught, bumpy and epic. AI is technology that must be controlled. The world reached consensus in the 1960s and reined in an existential risk. It can be done again.
