Hartford Courant

Artificial intelligence can help combat unreliable information

By Patrick C. Condo, founder and CEO of Seekr Technologies Inc.

At a historic hearing on May 16 to study a regulatory framework for artificial intelligence, or AI, U.S. Sen. Richard Blumenthal floated a clear, practical idea for protecting consumers from many of the potential risks posed by technologies like ChatGPT:

“Should we consider independent testing labs to provide scorecards and nutrition labels or the equivalent of nutrition labels?” he said. “Packaging that indicates to people whether or not the content can be trusted, what the ingredients are?”

Just as transparent nutrition labels have helped encourage healthier food choices, transparency in content labeling can help consumers make informed decisions about the content they consume, while also creating a framework for accountability.

Technology has given rise to a proliferation of online platforms that traffic in — depending on your point of view — falsehoods, disinformation and even conspiracy theories. It’s never been easier to access news and information, yet the rapid deployment of AI systems is making it more difficult than ever for consumers to objectively distinguish between what is true and what is not.

Congress doesn’t always move at the pace of technology — a point that was made in the hearing by both Sen. Blumenthal and his Republican counterpart, ranking member Josh Hawley of Missouri. But what if consumers could be equipped with the tools needed to accurately evaluate the trustworthiness of information with the same scrutiny as an unbiased data scientist and expert journalist?

Those resources are available now, courtesy of technology itself.

The stakes are high. Bad information aggravates our politics and our culture, it threatens health and safety, and the evidence suggests it is increasingly eroding trust in our democratic system. On the other hand, it has become increasingly clear that efforts to restrict contrarian points of view through censorship measures and content blocking are fueling the growth of alternative news unencumbered by journalistic ethics and standards.

For the past three years, I’ve been part of a team of technology engineers working to develop an AI-powered technology that can effectively combat unreliable information in a way that is transparent and conducive to accountability.

The central idea behind this technology is that reliable rating systems empower consumers by enabling better decision-making. Rating systems exist for everything from creditworthiness to automobiles to movies to wine.

Yet the largest source of information in the world — the internet — is unrated.

This has had — and will continue to have — profound implications for technologies like ChatGPT, which draw from a pool of online content, sometimes reliable, sometimes not.

The quality of online information, its adherence to journalistic principles and the general reliability of an author and domain source can be objectively measured, and these calculations can be achieved virtually instantaneously with the power of artificial intelligence.

The solution is as simple and as complicated as understanding how news and technology mesh.

For example, quality news stories generally have bylines. Valid headlines are free from exaggeration; they describe the story rather than appeal to emotion (i.e., clickbait). Points of view adhere closely to reported facts. Arguments are substantive rather than personal attacks. And quality news stories are hosted by transparent websites that share their mission, ownership and policies.
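To make the idea concrete, editorial signals like these can be combined into a single score. The sketch below is purely illustrative — the signal names and weights are hypothetical assumptions for this example, not Seekr’s actual model:

```python
# Hypothetical rubric scorer: each editorial signal carries an assumed weight,
# and an article's score is the sum of the weights it satisfies (0-100).
# Signal names and weights are illustrative, not any real product's values.

SIGNALS = {
    "has_byline": 25,              # the story identifies its author
    "headline_not_clickbait": 25,  # headline describes the story, no exaggeration
    "fact_based_reporting": 25,    # points of view adhere to reported facts
    "no_personal_attacks": 15,     # arguments are substantive, not personal
    "transparent_site": 10,        # site shares mission, ownership, policies
}

def quality_score(article: dict) -> int:
    """Sum the weights of the signals this article satisfies."""
    return sum(w for name, w in SIGNALS.items() if article.get(name))

example = {
    "has_byline": True,
    "headline_not_clickbait": True,
    "fact_based_reporting": True,
    "no_personal_attacks": False,
    "transparent_site": True,
}
print(quality_score(example))  # 85
```

A real system would infer each signal from the text itself with machine-learned classifiers rather than hand-set flags, but the labeling principle — transparent criteria rolled up into one consumer-facing number — is the same.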

News reports that deviate from these standards are of lesser quality and, therefore, are more likely to be vehicles for information that is objectively false.

Think of it this way: The internet has become something of an external hard drive for our brains — a concept authoritatively developed by behavioral psychologists who have studied how the internet has altered the way we think. As a result, the “Google effect” has trained consumers to rely less on their own powers of reason than on accessing the interpretations and conclusions (often disguised as “statements” — true or false) of others.

Our objective is to create an information market in which consumers are better informed. When that exists, it allows for healthy and productive debate. That healthy debate is vital now as cultural forces are leveraging technology to further polarize society by digging us into our positions, however accurate or inaccurate — or, for that matter, healthy — those positions may be.
