Boston Sunday Globe

The AI doomsaying is counterproductive

Overheated warnings of existential risk undermine the public's trust in science and make it harder to solve problems created by the technology.

By Tim Ritchie

Tim Ritchie is president of the Museum of Science.

Among the casualties in the inflamed discussion about artificial intelligence, the greatest might be public trust in science. Imagine how people feel when they hear the creators of AI testifying in Congress about the threat to humanity posed by their own creation. Imagine their deepening suspicions when they learn these same creators are moving with lightning speed to develop precisely what they say poses so great a risk. For many, this will confirm their distrust of scientists and science itself.

Science is one of the cornerstones of human progress. It is common ground for humanity. It gives us an objective framework for decision-making and serves as the foundation for technological progress. It enables us to solve global problems, rise from ignorance, and greet reality as a friend. So, when people distrust science, we all lose. Public health suffers, misinformation soars, climate change goes unaddressed, and poverty increases.

In the case of AI, the doomsayers have focused our attention on the wrong thing — the technology itself. The popular discussion has conjured an image of an independent being, like Frankenstein's monster: autonomous, amoral, and capable of turning itself on humanity without input from anyone. A more realistic view is the one articulated by the web pioneer Marc Andreessen: "AI is a computer program like any other — it runs, takes input, processes, and generates output . . . . It is owned by people and controlled by people, like any other technology."

The doomsayers have some good reasons to fear AI's potential negative impacts. AI can be used by bad actors to spread disinformation, automate warfare, and conduct mass surveillance. It certainly will eliminate some jobs. The sheer computing power it requires may increase global warming. And more attention will be required to keep AI from perpetuating bias and prejudice.

But, if those concerned about AI want to be productive, they should drop the doomsaying. Instead of scaring people away from AI — and science more generally — those who are worried should speak more practically and with more grounding about what AI actually is. They should celebrate its enormous promise for solving problems. They should encourage everyone to use it and to get smarter about it. And they should place its emergence in the context of every technological innovation that has come before it.

Understanding is power. When people start to use AI, they will experience how it can help them get better at many things. They will not think of AI as an independent, autonomous being but as the tool it is. They will have a sense of its potential to improve the human condition. They will develop confidence in their own ability to spot and resist misinformation and disinformation. They will push developers to do better at creating technologies that are more inclusive and unbiased. They will be positioned to help stave off the doomiest scenarios.

To be sure, it will take time for humanity to learn to use AI ethically. Teeing up this conversation is the most productive thing the AI experts can do for us. They can help us think about what practical steps we should take to ensure AI is used for good. They can acknowledge with humility that they can't reach this goal alone, but that they need the public to help develop AI in ways that elevate human dignity, safeguard democracy, and accelerate economic development for all. That conversation will be a productive one. And that conversation will increase, rather than erode, public confidence in science.

I wonder how the doomsayers would have responded to two women who attended an event about AI that we recently hosted at the Museum of Science in Boston. One is a lawyer, the other is a doctor, and both seemed to be in their 80s. Neither had a good impression of AI. Instead of trying to reason them out of their negative opinions, I simply asked if they were using ChatGPT themselves. They recoiled at the suggestion. So, we asked them to get out their phones and showed them how to use the app. Soon enough they were ready to engage — one for help in writing a condolence letter, and the other for suggestions about how to talk with her grandchildren. Whatever doubts they have about AI, I'm glad they will hold them as active, informed users.

I hope that if a doomsayer had been the one chatting with the two women at the museum, they too would have celebrated AI's possibilities for improving ordinary life. For those who believe doomsaying is warranted, the best way forward is to celebrate what is good about AI, to be practical about what is worrisome, and to be plain about what is possible. There's too much at stake — especially with respect to public trust in science — to do otherwise.
