The Manila Times

World on the edge of midnight, thanks to AI


LAST week, as it does each January, the Bulletin of the Atomic Scientists announced the “time” on its famous “Doomsday Clock,” a metaphor to visualize how close the human race is to its own destruction, with “midnight” representing the apocalypse. For 2024, the time on the Clock is 90 seconds to midnight, unchanged from 2023, and still the “closest to global catastrophe” it has ever been.

The Bulletin of the Atomic Scientists is a respected journal founded in 1945 by a group led by J. Robert Oppenheimer and Albert Einstein, and including a number of physicists who had worked on the US Manhattan Project that created the atomic bomb. These modern-day Pandoras knew better than anyone what potential evil they had unleashed on the world, and so dedicated themselves to warning against the dangers of nuclear proliferation, using the symbolic Doomsday Clock to indicate how close they believed humanity was to rendering itself extinct.

In recent years, the Clock was set at two minutes to midnight in 2019, moved to 100 seconds to midnight in 2020, and moved again to 90 seconds to midnight last year, largely because of Russia’s invasion of Ukraine. In keeping the Clock at 90 seconds this year, the Bulletin of the Atomic Scientists said, “Ominous trends continue to point the world toward global catastrophe,” citing the war in Ukraine, the renewed expansion of nuclear arsenals in China, Russia and the US, climate change impacts that are growing in scope and scale, and the existential threat posed by the unchecked use of artificial intelligence (AI).

While all of these threats are indeed worrisome, it is the inclusion of the latter, AI, that is the most surprising and alarming. The Bulletin noted that in 2023, “rapid and worrisome developments in the life sciences and other disruptive technologies accelerated, while governments made only feeble efforts to control them.”

There are three main threats from AI, according to the Bulletin. First, there is the potential for AI to be used in biological warfare or terrorism. “The convergence of emerging artificial intelligence tools and biological technologies may radically empower individuals to misuse biology,” the Bulletin said. “The concern is that large language models enable individuals who otherwise lack sufficient know-how to identify, acquire and deploy biological agents that would harm large numbers of humans, animals, plants, and other elements of the environment.”

The second threat from AI, which should come as no surprise to anyone, is its capability to create disinformation. “AI has great potential to magnify disinformation and corrupt the information environment on which democracy depends. AI-enabled disinformation efforts could be a factor that prevents the world from dealing effectively with nuclear risks, pandemics and climate change,” the Bulletin wrote.

Finally, the Bulletin scientists expressed alarm at the expanding use of AI in military applications. “Extensive use of AI is already occurring in intelligence, surveillance, reconnaissance, simulation and training. Of particular concern are lethal autonomous weapons, which identify and destroy targets without human intervention. Decisions to put AI in control of important physical systems — in particular, nuclear weapons — could indeed pose a direct existential threat to humanity,” they wrote.

Here in the Philippines, we have already seen some evidence of AI being used to create and spread disinformation, and some indication that it may be contributing to the skyrocketing number of cyberattacks and online and text scams. The government has acknowledged the risks posed by AI, but its efforts so far to study the potential problems and develop a regulatory response, something that must be done quickly given how fast the technology is evolving, can unfortunately only be described as “feeble,” to borrow the Bulletin’s word.

It has long been our position that while AI offers a great many potential benefits and work to develop it further is likely worthwhile, it is also clearly potentially dangerous. Governments should monitor the development of AI carefully and take steps to prevent its harmful use. Ideally, that can be done without stifling innovation, which is often an unintended consequence of otherwise well-meaning regulation. But we should also keep in mind that AI is potentially a very useful tool but not an absolute necessity for society and the economy to function productively and safely. We must be open to the possibility that heavy restrictions might be necessary.
