Who will be our AI watchdogs?
When someone approaches us trying to convince us of something that we think is a bit dodgy, our reaction has historically been, “show me the evidence.” Until very recently, most of us have felt that, when presented with such concrete evidence, we were in a position to assess its credibility and work out whether we believed the claims or not.
Of course, there have always been some limits to this approach, such as Josef Stalin pulling out a photo from his desk and saying, “Trotsky? No, I’ve never met anyone of that name, and anyway, if he did exist, he would obviously have been in this photo here, wouldn’t he?”
Apart from such situations, it hasn’t been unreasonable to trust our ability to judge such evidence. However, the world in which we could operate like that is coming to an end. We are not quite there yet, but in this AI age, we are on the edge of an era where many things – documents, audio, still images, and video – can be easily manipulated to tell whatever seemingly credible story anyone wants to tell, regardless of its truth or falsity.
The latest example we have is the photograph released by the Princess of Wales to reassure the public that she was recovering from a recent operation, which backfired when the media picked up that the image had been manipulated.
I think that it’s an example of the traditional media doing a good job – one that will become even more essential in the coming AI age. In a situation where any evidence can be fabricated, we are going to need what I like to think of as AI watchdogs. I mean this in two senses: first, people watching out for AI fabrication of evidence; second, the use of other AI systems – ones working for us – ferreting out where AI fabrication is taking place.
Our belief that we can readily figure out what evidence is trustworthy is the basis for how we currently operate on social media. In our social media feeds, we see a constant stream of posts, often from strangers fed to us by algorithms, linking to various pieces of “evidence” regarding whatever it is that they are claiming is the case.
Now, in a situation where any piece of evidence can be easily fabricated, it seems that this type of paradigm may well break down because it relies on laypeople like us being able to detect what is and is not credible evidence.
In the face of this developing situation, I think that we are likely going to have to withdraw into what I call firewalled communities.
These communities would not be completely cut off, but they would only accept incoming information from untrusted strangers if it has been validated in some way by a third party. Such communities would have information boundaries patrolled by AI watchdogs.
I know it’s unfortunate, because a lot of people are already concerned that we are functioning in separate information silos, but regardless of its downsides, I think that the problem of AI evidence fabrication may well force us in this direction.
So, who are likely candidates for the role of AI watchdogs? I think that, as illustrated by the Princess and the Picture story, the traditional media are one of the parties in a good position to play this role.
Unlike some random stranger on social media, they have a reputation to protect, commercial incentives to protect it, and are likely to be critiqued by other established media if they don’t do a good job of assessing the credibility of the evidence they provide.
There are also likely to be commercial incentives for entrepreneurs to get into the field of validating information and evidence. In addition, I think that AI watchdogs in the second sense – AI systems working for us – will ultimately be involved in patrolling the information borders of the firewalled communities that I think may emerge.
It’s easy to be pessimistic about the prospect of a flood of fabricated information coming our way. However, I think that the jury is still out on how this will all develop.
It is definitely a serious threat to the integrity of the information ecosystem. However, we need to remember that a while ago, many of us thought that email would sink beneath the burden of endless spam. In the event, protections were put in place, and email is still usable.
So we are going to face an arms race between the AI watchdogs, both human and AI-powered, and those who want to serve us up a sea of fabricated junk to reduce our ability to know what is and what is not true.
Given the parlous state of traditional media economics in the social media age, the disciplines around validating information that established outlets have developed over many years may become an increasingly important source of value – one they can build on to help bolster their current position.