What ‘Oppenheimer’ can teach us about AI
Christopher Nolan, director of the blockbuster movie “Oppenheimer,” has said he sees similarities between the Manhattan Project and today’s frontiers of technology, such as the development of artificial intelligence. An in-depth analysis would require far more space, but here are a few examples.
Both the A-bomb and AI involve unknown unknowns: the unknown scope of the bomb’s destruction prior to its first test, and the as-yet unknown consequences of engaging an AI system that thinks exponentially faster than its human programmers.
Both inventions have long-range impacts. The A-bomb forced a milestone change in military and diplomatic strategies; AI may well require milestone changes in regulatory strategies, which I’ll address below.
The contrasts between the two are worth noting as well.
The Manhattan Project was a tightly controlled government undertaking conducted under maximum secrecy. AI is essentially an entrepreneurial venture rushing to gain market share and maximize profits for the companies involved.
The A-bomb required long-lead planning for scarce resources and complex milestones — uranium mining and enrichment, bomb design, assembly and testing — accomplished by necessity on a punishing schedule. AI, as an iterative-learning system, is a self-improving programming exercise built on open-source and AI-generated code that morphs quickly. Tristan Harris, co-founder of the nonprofit Center for Humane Technology, has noted ominously that humans have no previous experience with exponential change — implying that we may not fully grasp the concept until we are consumed by it.
So how do we control the two genies we have released from the bottle?
During the late 1970s, I wrote a series of editorials on the philosophy of “mutual-assured destruction” and why it made sense under the assumptions of the time — that the few nuclear powers, having so much to lose in a nuclear war, would never be inclined to start one.
That reasoning held for four decades, but its assumptions have since broken down. A small nuclear dictatorship has far less to lose.
Tactical nukes, with their much lower destructive potential, turn a once-binary choice into a slippery slope, making their use possible if not probable.
As for AI, history shows we tend to wait until a problem becomes visible before reluctantly regulating it. That foot-dragging approach won’t work when the potential destructive consequences are sudden and severe.
Furthermore, this traditional model assumes government regulators possess the high-level AI skills required to craft solutions as fast as the tech industry creates problems — a bet I give low odds of success. Instead, we could shift the paradigm by asking questions such as:
What if AI projects were governed by the medical ethic of “first do no harm” instead of the business ethic of “first make a profit”?
What if AI were treated as an alien virus: Develop each product in an isolated environment, fully test its effects, then certify it by an independent agency before releasing it for general use — as the FDA and CDC do for vaccines?
What if initial AI projects were authorized only for limited, low-risk objectives, such as performing medical diagnostics or complex modeling of climate change, material resources, economic trends and population demographics?
Finally, what if we began AI development by asking a different question: not what AI can do, but what role it should be given in supporting its developers? Should it act as an “adviser,” providing options to human decision-makers? Could it act as a “mentor,” teaching developers how it arrived at innovative solutions and strategies?
Ask a different question to get a different, and perhaps better, answer.