The Pak Banker

Disinformation and fact-checking

- Asad Baig

Phones rang in the state of New Hampshire ahead of the presidential primary. Joe Biden’s voice was heard over the line. “We know the value of voting Democratic. It’s important that you save your vote for the November election,” the voice said.

“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.” Between 5,000 and 25,000 such calls were made. The Biden administration never initiated any of those calls; the voice was a deepfake, generated through artificial intelligence (AI) and engineered to sound like President Biden.

“These messages appear to be an unlawful attempt to disrupt the New Hampshire presidential primary election and to suppress New Hampshire voters,” said the state attorney general’s office in response to the ‘robocalls’.

These robocalls used caller ID spoofing, a technique that alters the caller ID to display a different phone number, hiding the actual source of the call. In this case, the robocall appeared to have come from a number associated with Kathy Sullivan, a former chairperson of the New Hampshire Democratic Party.

This example is part of a broader trend in which AI-generated content is used to mislead voters during election periods. In Bangladesh, feeds of ‘international’ news channels were fabricated using AI ahead of elections.

In these fabricated news segments, AI-generated anchors falsely reported significant events, including allegations of US involvement in funding riots and violence in Bangladesh. Earlier, foreign disinformation aimed at influencing voter behaviour in the 2016 US presidential election caught many off guard. Similar trends are observed globally.

The challenge, then, is to find effective means of counteracting political disinformation.

While stringent laws or the criminalisation of disinformation are often pegged as solutions, such measures could inadvertently criminalise free speech and be used to suppress legitimate discourse. The answer to combating disinformation, therefore, isn’t as straightforward as ‘enacting a law against fake news’ or erecting a ‘national firewall’. It requires a careful balance between regulation and the preservation of free expression.

An effective strategy for countering political disinformation is the ‘inoculation’ approach, a topic I have covered extensively in a previous op-ed. This method involves pre-emptively exposing the public to a weakened form of misinformation, thereby enabling them to better recognise and resist deceptive information.

Another proven strategy in combating disinformation is the consistent publication of fact-checks by reputable media organisations. This approach helps to identify and correct misinformation, fostering an informed public. It also enhances the credibility and reliability of media sources, making them trusted authorities in discerning truth from falsehood. However, this approach has its challenges.

Monitoring social media platforms for disinformation is an uphill task due to the enormous volume of content. Even with a substantial team, addressing the flood of disinformation items for fact-checking is daunting. Selecting just a few pieces from the thousands that circulate daily, while ensuring timely publication of the resulting fact-checks, is challenging.

This is compounded by the financial constraints newsrooms have faced in recent years, which have caused massive layoffs and pay cuts, making the allocation of sufficient human and technical resources for this task even more difficult.

Additionally, the financial viability of many fact-checking organisations remains a concern. A significant number of these outlets struggle to sustain operations due to funding challenges.

Many of them operate without a robust sustainability plan, often relying on partnerships through third-party fact-checking programmes funded by tech companies, or operating on a grant basis around landmark events. This lack of a stable financial model, and the limitations that result from it, poses a risk to their mid- to long-term viability.

For third-party fact-checkers working with tech companies, a significant concern is the potential conflict of interest. There is also a substantial gap between media and information literacy (MIL) initiatives and conventional fact-checking, which debunks falsehoods only after they have spread.

Tech platforms might support MIL and traditional fact-checking, but their willingness to fund in-depth investigations into organised disinformation campaigns is less certain, especially where those investigations might scrutinise the role of the tech companies themselves, including their failure to effectively regulate hate speech against vulnerable groups. Comprehensive investigations into the sources and beneficiaries of disinformation, such as those conducted after the 2016 US presidential election, are crucial but may not always receive support from these platforms.

To effectively combat disinformation, it is crucial to strengthen credible newsrooms, the long-standing gatekeepers of information, rather than creating parallel structures with little to no transparency in ownership.

This means making the publication of fact-checks a sustainable venture for credible newsrooms, enhancing their web traffic and, consequently, their revenue.

There are several strategies to achieve this. Drawing on my recent experience with a newsroom that profits from publishing fact-checks, a combination of leveraging Cunningham’s Law, effective search engine optimisation and smart social media tactics can be highly effective. This approach is also vital in redirecting web traffic to credible news sources, thereby countering the dominance of big tech companies over Pakistan’s digital advertising revenue.
