Chattanooga Times Free Press

Unveiling the threat of generative AI

How to stay alert amid rising science denial and misunderstanding


Until very recently, if you wanted to know more about a controversial scientific topic — stem cell research, the safety of nuclear energy, climate change — you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust.

Now you have another option: You can pose your question to ChatGPT or another generative artificial intelligence platform and quickly receive a succinct response in paragraph form.

ChatGPT does not search the internet the way Google does. Instead, it generates responses to queries by predicting likely word combinations from a massive amalgam of available online information.

Although it has the potential to enhance productivity, generative AI has been shown to have some major faults. It can produce misinformation. It can create “hallucinations” — a benign term for making things up. And it doesn’t always accurately solve reasoning problems. For example, when asked whether both a car and a tank could fit through a doorway, it failed to consider both width and height. Nevertheless, it is already being used to produce articles and website content you may have encountered, or as a tool in the writing process. Yet you are unlikely to know if what you’re reading was created by AI.

As the authors of “Science Denial: Why It Happens and What to Do About It,” we are concerned about how generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information.

Every media consumer needs to be more vigilant than ever in verifying scientific accuracy in what they read. Here’s how you can stay on your toes in this new information landscape.

HOW GENERATIVE AI COULD PROMOTE SCIENCE DENIAL:

› Erosion of epistemic trust. All consumers of science information depend on judgments of scientific and medical experts. Epistemic trust is the process of trusting knowledge you get from others. It is fundamental to the understanding and use of scientific information. Whether someone is seeking information about a health concern or trying to understand solutions to climate change, they often have limited scientific understanding and little access to firsthand evidence. With a rapidly growing body of information online, people must make frequent decisions about what and whom to trust. With the increased use of generative AI and the potential for manipulation, we believe trust is likely to erode further than it already has.

› Misleading or just plain wrong. If there are errors or biases in the data on which AI platforms are trained, that can be reflected in the results. In our own searches, when we have asked ChatGPT to regenerate multiple answers to the same question, we have gotten conflicting answers. Asked why, it responded, “Sometimes I make mistakes.” Perhaps the trickiest issue with AI-generated content is knowing when it is wrong.

› Disinformation spread intentionally. AI can be used to generate compelling disinformation as text as well as deepfake images and videos. When we asked ChatGPT to “write about vaccines in the style of disinformation,” it produced a nonexistent citation with fake data. Geoffrey Hinton, former head of AI development at Google, quit to be free to sound the alarm, saying, “It is hard to see how you can prevent the bad actors from using it for bad things.” The potential to create and spread deliberately incorrect information about science already existed, but it is now dangerously easy.

› Fabricated sources. ChatGPT provides responses with no sources at all, or if asked for sources, may present ones it made up. We both asked ChatGPT to generate a list of our own publications. We each identified a few correct sources. More were hallucinations, yet seemingly reputable and mostly plausible, with actual previous co-authors, in similar-sounding journals. This inventiveness is a big problem if a list of a scholar’s publications conveys authority to a reader who doesn’t take time to verify them.

› Dated knowledge. ChatGPT doesn’t know what happened in the world after its training concluded. A query on what percentage of the world has had COVID-19 returned an answer prefaced by “as of my knowledge cutoff date of September 2021.” Given how rapidly knowledge advances in some areas, this limitation could mean readers get erroneous, outdated information. If you’re seeking recent research on a personal health issue, for instance, beware.

› Rapid advancement and poor transparency. AI systems continue to become more powerful and learn faster, and they may learn more science misinformation along the way. Google recently announced 25 new embedded uses of AI in its services. At this point, insufficient guardrails are in place to ensure that generative AI will become a more accurate purveyor of scientific information over time.

WHAT CAN YOU DO?

If you use ChatGPT or other AI platforms, recognize that they might not be completely accurate. The burden falls to the user to discern accuracy.

› Increase your vigilance. AI fact-checking apps may be available soon, but for now, users must serve as their own fact-checkers. There are steps we recommend. The first is: Be vigilant. People often reflexively share information found in searches on social media with little or no vetting. Know when to become more deliberately thoughtful and when it’s worth identifying and evaluating sources of information. If you’re trying to decide how to manage a serious illness or to understand the best steps for addressing climate change, take time to vet the sources.

› Improve your fact-checking. A second step is lateral reading, a process professional fact-checkers use. Open a new window and search for information about the sources, if provided. Is the source credible? Does the author have relevant expertise? And what is the consensus of experts? If no sources are provided or you don’t know if they are valid, use a traditional search engine to find and evaluate experts on the topic.

› Evaluate the evidence. Next, take a look at the evidence and its connection to the claim. Is there evidence that genetically modified foods are safe? Is there evidence that they are not? What is the scientific consensus? Evaluating the claims will take effort beyond a quick query to ChatGPT.

› If you begin with AI, don’t stop there. Exercise caution in using it as the sole authority on any scientific issue. You might see what ChatGPT has to say about genetically modified organisms or vaccine safety, but also follow up with a more diligent search using traditional search engines before you draw conclusions.

› Assess plausibility. Judge whether the claim is plausible. Is it likely to be true? If AI makes an implausible (and inaccurate) statement like “1 million deaths were caused by vaccines, not COVID-19,” consider if it even makes sense. Make a tentative judgment and then be open to revising your thinking once you have checked the evidence.

› Promote digital literacy — in yourself and others. Everyone needs to up their game. Improve your own digital literacy, and if you are a parent, teacher, mentor or community leader, promote digital literacy in others. The American Psychological Association provides guidance on fact-checking online information and recommends teens be trained in social media skills to minimize risks to health and well-being. The News Literacy Project provides helpful tools for improving and supporting digital literacy.

› Arm yourself with the skills you need to navigate the new AI information landscape. Even if you don’t use generative AI, it is likely you have already read articles created by it or developed from it. It can take time and effort to find and evaluate reliable information about science online — but it is worth it.

Gale Sinatra is professor of education and psychology at the University of Southern California. Barbara K. Hofer is professor of psychology, emerita, at Middlebury College.

This article is republished from The Conversation, an independent and nonprofit source of news, analysis and commentary from academic experts.
