Boston Sunday Globe

AI Is Muddying the Truth. The Way to Fix It Is Centuries Old.

By Adam Bly and Amy Brand

Stop, for a moment, and consider the staggering amount of information you need to get through a single day while making sense of the world. We are constantly seeking answers: Is this pill safe for me to take? What are the main causes of climate change? Is the defendant guilty beyond a reasonable doubt?

To make informed decisions, we need reliable sources of information — a tall task in an age of misinformation. The waters have become even murkier with the advent of generative artificial intelligence — programs able to produce realistic and persuasive counterfeits of scenes that don’t exist, events that never happened, and arguments nobody made.

Not only does our concept of truth feel more slippery today, but the long-established ways we arrive at insights and decisions are being compromised. Having worked in the data and information sector for a combined five decades, we are very concerned that, left unchecked, the rapid rollout of generative AI could erode the epistemological foundations of society — that is, the ways in which we construct knowledge. As the cognitive scientist Douglas Hofstadter wrote in The Atlantic, it could well “undermine the very nature of truth on which our society — and I mean all of human society — is based.”

The White House’s recent announcement that it has secured voluntary “commitments” from a handful of companies to improve the safety of their AI technology is a start, but it does not address the fundamental risk humanity faces: the end of our ability to discern truth. As our society faces existential crises — from climate change and pandemic preparedness to systemic racism and the fragility of democracy — we urgently need to protect trust in evidence-based decision making.

Among the White House’s proposals to regulate generative AI is a watermarking system — a step in the right direction, but one that falls far short of enforcing transparency and verifiability. Should this actually be adopted, some will see the AI watermark and reflexively discount the content as “fake news”; some won’t see the watermark at all; and others — scrolling through their social media feeds or otherwise trying to digest massive amounts of information — will trust the output purely out of convenience.

More fundamentally, the question of whether a news story or journal article is AI-generated is distinct from whether that content is fact-based or credible. To truly enhance trust in and support for evidence-based decisions, the public (and our regulatory agencies) needs an audit trail back to underlying data sources, methodologies, and prompts. We need to be able to answer questions like: How was the conclusion arrived at? How was the diagnosis made?

Despite its well-known flaws, the centuries-old scientific method, together with its counterparts across law, medicine, and journalism, is the best approach humanity has found to arrive at testable, reliable — and revisable — conclusions and predictions about the world. We observe, hypothesize, test, analyze, report, and repeat our way to a truer understanding of the world and more effective solutions for how to improve it.

Decision making in modern, democratic society is underpinned by this method. Tools such as peer review in scientific journals and fact-checking ensure meritocracy, reliability, and self-correction. Randomized controlled trials ensure effectiveness; jurisprudence takes legal precedents into account. Also built into the scientific method is humility about the limitations of what is knowable by a given means at a given point in time, and honesty about the confidence we can place in any conclusion based on how it was arrived at.

An answer generated by an AI chatbot that is trained to sound authoritative but has no actual observed, experienced, or measured model of the world to align with — and is unable to cite its sources or explain how it used those sources — violates these principles and standards. If you haven’t yet experienced an AI hallucination, just ask a chatbot to create a bio of you. It is likely to attribute work to you that you had no hand in, and cities of residence where you never lived.

There is also an important historical relationship between how we know and how we govern. It can be argued that the reason and logic that defined the Scientific Revolution in the 16th and 17th centuries were also the foundation for democratic thought in Europe and, later, the Declaration of Independence. At this already perilous moment for democracy around the world, we should at least ponder this link.

Some might argue that letting generative AI technologies run unchecked is the right thing in the name of technological progress; the path to artificial general intelligence may produce breakthroughs that reveal deeper truths about the universe or better solutions to the world’s challenges. But that should be society’s assessment to make — not left to a handful of corporations — before these technologies are more widely deployed.

We must build trust and transparency into any AI system that is intended to support decision making. We could train AI systems on source material that adheres to society’s highest standards of trust, such as peer-reviewed scientific literature, corrected for retractions. We could design them to extract facts and findings about the world from reliable source material and use them exclusively to generate answers. We could require that they cite their sources and show their work, and that they be honest about their limitations and biases, reflecting uncertainty back to the user. Efforts are already underway to build these mechanisms into AI, with the hope that they can raise society’s expectations for transparency and accountability.

Evidence-based decision making should immediately become a principle of nascent international AI governance efforts, especially as countries with diverse models of governance introduce AI regulations. Appropriate governance need not compromise scientific and technological progress.

We should also keep in mind that the methods and legitimacy of science have been — and continue to be — appropriated for scientific racism. As we consider how decisions are made in both the private and public sectors — from those about hiring and college admissions to government policies — we must consider the sources we base them on. Modern society is full of historical bias, discrimination, and subjugation. AI should be used to shine a light on these inequities — not calcify them further into the training data of automated and impenetrable decisions for decades to come.

We have a once-in-a-century opportunity to define, collectively, a more rational, explainable, systemic, inclusive, and equitable basis for decision making — powered by AI. Perhaps we can even chart a future in which AI helps inoculate humanity against our own fallibility, gullibility, and bias in the interest of a fairer society and healthier public sphere.

Let’s not waste this moment.
