World on the edge of midnight, thanks to AI
LAST week, as it does each January, the Bulletin of the Atomic Scientists announced the “time” on its famous “Doomsday Clock,” a metaphor that visualizes how close the human race is to its own destruction, with “midnight” representing the apocalypse. For 2024, the time on the Clock is 90 seconds to midnight, unchanged from 2023 and still the closest to “global catastrophe” it has ever been.
The Bulletin of the Atomic Scientists is a respected journal founded in 1945 by a group led by J. Robert Oppenheimer and Albert Einstein, and including a number of physicists who had worked on the US Manhattan Project that created the atomic bomb. These modern-day Pandoras knew better than anyone what potential evil they had unleashed on the world, and so dedicated themselves to warning against the dangers of nuclear proliferation, using the symbolic Doomsday Clock as a device to indicate how close they believed humanity was to rendering itself extinct.
In recent years, the Clock was set at two minutes to midnight in 2019, moved to 100 seconds to midnight in 2020, and moved again to 90 seconds to midnight last year, largely because of Russia’s invasion of Ukraine. In keeping the Clock at 90 seconds this year, the Bulletin said, “Ominous trends continue to point the world toward global catastrophe,” citing the war in Ukraine, the renewed expansion of nuclear arsenals in China, Russia and the US, climate change impacts that are growing in scope and scale, and the existential threat posed by the unchecked use of artificial intelligence (AI).
While all of these threats are indeed worrisome, it is the inclusion of the last of these, AI, that is the most surprising and alarming. The Bulletin noted that in 2023, “rapid and worrisome developments in the life sciences and other disruptive technologies accelerated, while governments made only feeble efforts to control them.”
There are three main threats from AI, according to the Bulletin. First, there is the potential for AI to be used in biological warfare or terrorism. “The convergence of emerging artificial intelligence tools and biological technologies may radically empower individuals to misuse biology,” the Bulletin said. “The concern is that large language models enable individuals who otherwise lack sufficient know-how to identify, acquire and deploy biological agents that would harm large numbers of humans, animals, plants, and other elements of the environment.”
The second threat from AI, which should come as no surprise to anyone, is its capability to create disinformation. “AI has great potential to magnify disinformation and corrupt the information environment on which democracy depends. AI-enabled disinformation efforts could be a factor that prevents the world from dealing effectively with nuclear risks, pandemics and climate change,” the Bulletin wrote.
Finally, the Bulletin scientists expressed alarm at the expanding use of AI in military applications. “Extensive use of AI is already occurring in intelligence, surveillance, reconnaissance, simulation and training. Of particular concern are lethal autonomous weapons, which identify and destroy targets without human intervention. Decisions to put AI in control of important physical systems — in particular, nuclear weapons — could indeed pose a direct existential threat to humanity,” they wrote.
Here in the Philippines, we have already seen evidence of AI being used to create and spread disinformation, and there are indications that it may be contributing to the skyrocketing number of cyberattacks and online and text scams. The government has acknowledged the risks posed by AI, but its efforts so far to study the potential problems and craft a regulatory response, work that must move quickly given how fast the technology is evolving, can only be described as “feeble,” to borrow the word used in the Bulletin’s statement.
It has long been our position that while AI offers a great many potential benefits and further work to develop it is likely worthwhile, it is also a clear potential danger. Governments should monitor the development of AI carefully and take steps to prevent its harmful use. Ideally, that can be done without stifling innovation, an unintended consequence of much otherwise well-meaning regulation. But we should also keep in mind that AI, however useful a tool it may be, is not an absolute necessity for society and the economy to function productively and safely. We must be open to the possibility that heavy restrictions may prove necessary.