Daily Press

What ‘Oppenheimer’ can teach us about AI

By John E. Hammond

John E. Hammond, a Virginia Beach resident, is a former reporter and editorial page writer for newspapers in Rochester, New York, and Melbourne, Florida.

Christopher Nolan, director of the blockbuster movie “Oppenheimer,” has said he sees similarities between the Manhattan Project and today’s frontiers of technology, such as the development of artificial intelligence. An in-depth analysis would require far more space, but here are a few examples.

Both the A-bomb and AI involve unknown unknowns: the unknown scope of destruction of the bomb prior to its first test, and the as-yet unknown consequences of engaging an AI system that thinks exponentially faster than its human programmers.

Both inventions can have long-range impacts. The A-bomb caused a milestone change in military and diplomatic strategies; AI may well require milestone changes in regulatory strategies, which I’ll address below.

The contrasts between the two are worth noting as well.

The Manhattan Project was a highly controlled government undertaking with tight security for maximum secrecy. AI is essentially an entrepreneurial venture rushing to gain market share and maximize profits for the companies involved.

The A-bomb required long-lead planning for scarce resources and events — uranium mining and enrichment, bomb design, assembly and testing — accomplished by necessity on a punishing schedule. AI, as an iterative-learning system, is a self-improving programming exercise using open-source and AI-generated code that morphs quickly. Tristan Harris, co-founder of the nonprofit Center for Humane Technology, has noted ominously that humans have no previous experience with exponential change, with the implication that we may not fully grasp that concept until we are consumed by it.

So how do we control the two genies we have released from the bottle?

During the late 1970s, I wrote a series of editorials on the philosophy of “mutually assured destruction” and why it made sense under the assumptions of the time — that the few nuclear powers, having so much to lose in a nuclear war, would never be inclined to start one.

That reasoning worked well for four decades, but its assumptions have since broken down. Small nuclear dictatorships have nothing to lose.

Tactical nukes with much lower destructive potential turn a once-binary choice into a slippery slope, making their use possible if not probable.

As for AI, history shows we tend to wait for a problem to become visible before reluctantly regulating it. That foot-dragging approach won’t work when potential destructive consequences can be sudden and severe.

Furthermore, this traditional model assumes government regulators possess the high-level AI skills required to craft solutions as fast as the tech industry creates problems, an assumption to which I give low odds of success. Instead, a paradigm shift starts with asking questions such as:

What if AI projects were governed by the medical ethic of “first do no harm” instead of the business ethic of “first make a profit”?

What if AI were treated as an alien virus: Develop each product in an isolated environment, fully test its effects, then have it certified by an independent agency before releasing it for general use — as the FDA and CDC do for vaccines?

What if initial AI projects were authorized only for limited low-risk objectives, such as performing medical diagnostics or complex modeling of climate change, material resources, economic trends, population demographics, etc.?

Finally, what if we begin AI development by asking a different question: not what can AI do, but what role should it be given to support its developers? Should it act as an “adviser,” providing options to human decision-makers? Could it act as a “mentor,” teaching developers how it came up with innovative solutions and strategies?

Ask a different question to get a different, and perhaps better, answer.
