Boston Sunday Globe

To guide AI rules, take a page from nuclear weapon safety standards


Two pieces in last Sunday’s edition of the Globe addressed the problems of controlling artificial intelligence: “US regulates cars, radio, and TV. When will it regulate AI?” (Page A12) and “Is AI really as good as advertised?” (Ideas). New York Times writer Ian Prasad Philbrick and Globe Ideas contributor Elizabeth Svoboda each described real and depressing concerns about AI, but I was surprised that neither cited Max Tegmark, a professor of physics at MIT. Tegmark suggests feasible solutions, such as the development of safety standards, which he is pursuing through his Future of Life Institute. However, it seems like a slow process.

Still, as an old nuclear weapons safety guy, I am taken with the arguments expressed by Philbrick, Svoboda, and Tegmark. I struggled for more than 20 years, reviewing safety measures in nuclear weapon systems and then, upon finding lapses, persuading reluctant program managers to make the changes necessary to meet Defense Department safety standards. No nuclear weapon system is authorized for use in fleet or aircraft units until the Nuclear Weapon System Safety Group finds it in compliance. Defense Department safety standards are the primary reason for the unparalleled safety record of the US nuclear weapon stockpile since the 1960s.

System-specific safety standards also can provide the desired control over the design and use of safe AI systems. A small group could develop a useful definition of standards in a relatively short time and then let others comment. It’s an understatement to say that the more people initially involved, the more time would be needed to reach agreement, so it would be crucial to keep the group to a manageable size, say, six or seven members.

Then would come the organizational challenge: Who runs the show?

CHESTER A. KUNZ JR.

Franklin

The writer is a retired Navy commander.
