AI doomsaying is counterproductive
Overheated warnings of existential risk undermine the public’s trust in science and make it harder to solve problems created by the technology.
Among the casualties in the inflamed discussion about artificial intelligence, the greatest might be public trust in science. Imagine how people feel when they hear the creators of AI testifying in Congress about the threat to humanity posed by their own creation. Imagine their deepening suspicions when they learn these same creators are moving with lightning speed to develop precisely what they say poses so great a risk. For many, this will confirm their distrust of scientists and science itself.
Science is one of the cornerstones of human progress. It is common ground for humanity. It gives us an objective framework for decision-making and serves as the foundation for technological progress. It enables us to solve global problems, rise from ignorance, and greet reality as a friend. So, when people distrust science, we all lose. Public health suffers, misinformation soars, climate change goes unaddressed, and poverty increases.
In the case of AI, the doomsayers have focused our attention on the wrong thing — the technology itself. The popular discussion has conjured an image of an independent being, like Frankenstein’s monster: autonomous, amoral, and capable of turning itself on humanity without input from anyone. A more realistic view is the one articulated by the web pioneer Marc Andreessen: “AI is a computer program like any other — it runs, takes input, processes, and generates output . . . . It is owned by people and controlled by people, like any other technology.”
The doomsayers have some good reasons to fear AI's potential negative impacts. AI can be used by bad actors to spread disinformation, automate warfare, and conduct mass surveillance. It certainly will eliminate some jobs. The sheer computing power it requires may increase global warming. And more attention will be required to keep AI from perpetuating bias and prejudice.
But, if those concerned about AI want to be productive, they should drop the doomsaying. Instead of scaring people away from AI — and science more generally — those who are worried should speak more practically and with more grounding about what AI actually is. They should celebrate its enormous promise for solving problems. They should encourage everyone to use it and to get smarter about it. And they should place its emergence in the context of every technological innovation that has come before it.
Understanding is power. When people start to use AI, they will experience how it can help them get better at many things. They will not think of AI as an independent, autonomous being but as the tool it is. They will have a sense of its potential to improve the human condition. They will develop confidence in their own ability to spot and resist misinformation and disinformation. They will push developers to do better at creating technologies that are more inclusive and unbiased. They will be positioned to help stave off the doomiest scenarios.
To be sure, it will take time for humanity to learn to use AI ethically. Teeing up this conversation is the most productive thing the AI experts can do for us. They can help us think about what practical steps we should take to ensure AI is used for good. They can acknowledge with humility that they can’t reach this goal alone, but that they need the public to help develop AI in ways that elevate human dignity, safeguard democracy, and accelerate economic development for all. That conversation will be a productive one. And that conversation will increase, rather than erode, public confidence in science.
I wonder how the doomsayers would have responded to two women who attended an event about AI that we recently hosted at the Museum of Science in Boston. One is a lawyer, the other a doctor, and both seemed to be in their 80s. Neither had a good impression of AI. Instead of trying to reason them out of their negative opinions, I simply asked if they were using ChatGPT themselves. They recoiled at the suggestion. So I asked them to get out their phones and showed them how to use the app. Soon enough they were ready to engage — one for help in writing a condolence letter, and the other for suggestions about how to talk with her grandchildren. Whatever doubts they have about AI, I'm glad they will hold them as active, informed users.
I hope that if a doomsayer had been the one chatting with the two women at the museum, they too would have celebrated AI’s possibilities for improving ordinary life. For those who believe doomsaying is warranted, the best way forward is to celebrate what is good about AI, to be practical about what is worrisome, and to be plain about what is possible. There’s too much at stake — especially with respect to public trust in science — to do otherwise.