The Press

Who will be our AI watchdogs?

- Paul Duignan

Paul Duignan is a commentator on AI, and a psychologist and organisational and social strategist. He has just written a book, Friending AI, that discusses AI watchdogs and other likely developments in AI.

When someone approaches us trying to convince us of something that we think is a bit dodgy, our reaction has historically been to say, “show me the evidence”. Until very recently, most of us have felt that, when presented with such concrete evidence, we were in a position to assess its credibility and work out whether we believed the claims or not.

Of course, there have always been some limits to this approach, such as Josef Stalin pulling a photo from his desk and saying, “Trotsky? No, I’ve never met anyone of that name, and anyway, if he did exist, he would obviously have been in this photo here, wouldn’t he?”

Apart from such situations, it hasn’t been unreasonable to trust our ability to judge such evidence up until now. However, the world in which we could operate like that is about to be stubbed out. We are not quite there yet, but in this AI age, we are on the edge of an era where many things – documents, audio, still images, and video – can be easily manipulated to tell whatever seemingly credible story anyone wants to tell, regardless of its truth or falsity.

The latest example is the photograph released by the Princess of Wales to reassure the public that she was recovering from a recent operation, which backfired when media picked up that the image had been manipulated.

I think that it’s an example of the traditional media doing a good job – one that will become even more essential in the coming AI age. In a situation where any evidence can be fabricated, we are going to need what I like to think of as AI watchdogs. I mean this in two senses: first, people watching out for AI fabrication of evidence; second, the use of other AI systems – ones working for us – ferreting out where AI fabrication is taking place.

Our belief that we can readily figure out what evidence is trustworthy is the basis for how we currently operate on social media. In our social media feeds, we see a constant stream of posts, often from strangers fed to us by algorithms, linking to various pieces of “evidence” regarding whatever it is that they are claiming is the case.

Now, in a situation where any piece of evidence can be easily fabricated, this paradigm may well break down, because it relies on laypeople like us being able to detect what is and is not credible evidence.

In the face of this developing situation, I think that we are likely going to have to withdraw into what I call firewalled communities.

These communities would not be completely cut off, but they would only accept incoming information from untrusted strangers if it had been validated in some way by a third party. Such communities would have information boundaries patrolled by AI watchdogs.

I know it’s unfortunate, because many people are already concerned that we function in separate information silos, but regardless of its downsides, I think the problem of AI evidence fabrication may well force us in this direction.

So, who are likely candidates for the role of AI watchdogs? I think that, as illustrated by the Princess and the Picture story, the traditional media are one of the parties in a good position to play this role.

Unlike some random stranger on social media, they have a reputation to protect, commercial incentives to protect it, and are likely to be critiqued by other established media if they don’t do a good job of assessing the credibility of the evidence they provide.

There are also likely to be commercial incentives for entrepreneurs to get into the field of validating information and evidence. In addition, I think that AI watchdogs in the second sense – AI systems working for us – will ultimately be involved in patrolling the information borders of the firewalled communities that I think may emerge.

It’s easy to be pessimistic about the prospect of a flood of fabricated information coming our way. However, I think that the jury is still out on how this will all develop.

It is definitely a serious threat to the integrity of the information ecosystem. However, we need to remember that a while ago, many of us thought that email would sink beneath the burden of endless spam. In the event, protections were put in place, and email is still usable.

So we are going to face an arms race between the AI watchdogs, both human and AI-powered, and those who want to serve us up a sea of fabricated junk to reduce our ability to know what is and what is not true.

Given the parlous state of traditional media economics in the social media age, the disciplines around validating information that established outlets have developed over many years may become an increasingly important source of value – one they can use to help bolster their current position.

The release of various generative AI tools into the public domain has prompted warnings of an overload of hard-to-detect misinformation and disinformation flooding social media channels.
