Los Angeles Times

AI’s real potential for lethal toxicity

Scientists conducted a machine learning experiment whose results should horrify us all.

By Margaret Wertheim, a science writer and artist who has written books on the cultural history of physics.

Last week world leaders gathered to discuss the possibility that Vladimir Putin might use chemical weapons in Ukraine. All the more alarming, then, to read a report published this month about how AI software has been used to design toxins, including the infamous nerve agent VX — classified by the U.N. as a weapon of mass destruction — and even more noxious compounds.

In less than six hours, commercially available artificial intelligence software typically used by drug researchers to discover new kinds of medications was able to come up with 40,000 toxic compounds instead. Many of these substances were previously unknown to science and are possibly far more deadly than anything we humans have created on our own.

Although the report’s authors stress that they have not synthesized any of the toxins — nor was this their goal — the mere fact that commonly used machine learning software was so easily able to design lethal compounds should horrify us all.

The software the researchers relied on is used commercially by hundreds of companies working in the pharmaceutical industry worldwide. It could easily be acquired by rogue states or terrorist groups. Although the report’s authors say expertise is still needed to manufacture powerful toxins, adding AI to the field of drug discovery has dramatically lowered the technical threshold required for chemical weapon design.

How will we police who gets access to this technology? Can we police it?

I have never been much concerned about the “AI is going to kill us” argument promulgated by doomsayers and envisioned in films such as “Terminator.” Although I love the franchise, as someone trained in computer science, I’ve seen the storyline as a rather delusional fantasy dreamed up by tech dudes to amp up their own significance. Skynet makes for good sci-fi, but computers are nowhere near true intelligence, and there’s a long way to go before they could “take over.”

And yet. The scenario presented in the journal Nature Machine Intelligence outlines a threat almost no one in the drug discovery field appears to have even contemplated. Certainly not the report’s authors, who couldn’t find it mentioned “in the literature,” and who admit to being shocked by their findings. “We were naïve about the potential misuse of our trade,” they write. “Even our research on Ebola and neurotoxins ... had not set our alarm bells ringing.”

Their study “highlights how a nonhuman autonomous creator of a deadly chemical weapon is entirely feasible.” They are fearful not of some distant dystopian future but of what could happen right now. “This is not science fiction,” they declare, expressing a degree of emotion rarely seen in a technical paper.

Let’s back up for a moment and look at how this research came into being. The work was originally intended as a thought experiment: What is AI capable of if set a nefarious goal? The company behind the research, Collaborations Pharmaceuticals Inc., is a respected if small player in the burgeoning field of AI-based drug discovery.

“We have spent decades using computers and AI to improve human health — not to degrade it,” is how the four coauthors describe their work, which is supported by grants from the National Institutes of Health.

The scientists were invited to contribute a paper to a biennial conference hosted by the Swiss Federal Institute for Nuclear, Biological and Chemical Protection on “how AI technologies for drug discovery could potentially be misused.” It was a purely theoretical exercise.

The four scientists approached the problem with a simple logic: Rather than set their AI software the task of finding beneficial chemicals, they inverted the strategy and asked it to find destructive ones. They fed the program the same data they usually use, from databases that catalog the therapeutic and toxic effects of various substances.

Within hours, the machine-learning algorithms popped back thousands of appalling compounds. The program not only produced VX (used to assassinate Kim Jong Un’s half-brother in Kuala Lumpur in 2017) but also many other known chemical warfare agents. The researchers confirmed these through “visual identification with molecular structures” recorded in public chemistry databases. Worse, the software proposed many molecules the researchers had never seen before that “looked equally plausible” as toxins and were perhaps more dangerous.

All it took was a target flip, and an “innocuous generative model” was transformed “from a helpful tool of medicine into a generator of likely deadly molecules.”

The molecules are designs only, but as the authors write in their report: “For us, the genie is now out of the medical bottle.” They can “erase” their record of these substances, but they “cannot delete the knowledge” of how others may recreate them.

What alarms the authors most is that, as far as they could discover, the potential for misuse of a technology patently designed for good has not been considered at all by its community of users. Creators of de novo drugs, they point out, are just not trained to think about subversion.

In the history of science, there are countless examples of good work being turned to harmful ends. Newton’s laws of motion are used to design missiles; splitting the atom gave rise to atomic bombs; pure mathematics helps governments develop surveillance software. Knowledge is often a double-edged sword.

Forget about Skynet. Software and knowhow designed to save our lives may turn out to be one of the greatest threats we face.
