South China Morning Post

A.I. DEEPFAKES OF STARS’ POLITICAL VIEWS SURFACE

Manipulated videos of celebrities Aamir Khan and Ranbir Kapoor criticising the prime minister fuel fears over disinformation and technology misuse

- Kaisar Andrabi

Fears over deepfake videos have again surfaced in India amid an election that will last until June, with clips this time showing Bollywood actors declaring their political views.

The trend underscores the pervasive use of artificial intelligence (AI) technology to manipulate narratives.

Popular actors Aamir Khan and Ranbir Kapoor were pictured saying Prime Minister Narendra Modi had failed to keep campaign promises and address critical economic issues during his two terms.

The clips end with the opposition Congress election symbol and slogan: “Vote for Justice, Vote for Congress”.

The celebrities, however, have denied involvement in the videos, which experts say place a heavy burden on the Indian public to discern fact from fiction in a society where opinion can be easily swayed by cult culture among those untrained in critical thinking.

Another entertainer, Ranveer Singh, was also shown in a clip endorsing a political party.

His father filed a complaint against an X (formerly Twitter) user, who was booked for misusing technology with the intent of harming the actor’s reputation.

Indian fact-checking platform BOOM analysed Singh’s video using itisaar, a deepfake detection tool developed by the Indian Institute of Technology Jodhpur, determining that the content contained an AI voice clone.

“Keeping in mind how disinformation is already one of the biggest issues the country is facing, the introduction of AI-led disinformation worsens an already bad situation significantly,” said Archis Chowdhury, a senior fact-checker and correspondent at BOOM.

Namrata Maheshwari, senior policy counsel at Access Now, a global digital rights organisation, said the exploitation or fraudulent use of any person’s identity should be taken seriously.

“During sensitive periods such as elections, the need to identify such misuse and take steps to rectify and prevent it is even more urgent,” Maheshwari said.

“Political parties should not use any harmful misinformation or disinformation in their campaigns, regardless of whether it is AI-generated,” she said.

She noted that it was a “disservice to society” when political groups or their members spread manipulated content or hate speech on social media without checking the veracity of the message.

Maheshwari said people were also inundated with information during the election season, and the obligation of verifying the facts should not fall squarely on the voter.

She said AI had compounded existing issues with disinformation by reducing the time and cost involved in producing harmful content. “Algorithms used by social media platforms, designed to rapidly circulate eyeball-grabbing content, are equally to blame for the wide reach of misleading, manipulated media. So at least some of the solutions need to be aimed at controlling dissemination.”

Nirali Bhatia, a Mumbai-based psychologist who helps clients deal with cyberbullying, said Indian celebrities’ political views had a significant impact on voters. “If their ideologies align with or oppose a voter, the effect is profound. Unfortunately, this influence isn’t always positive. Even after debunking fake videos, doubts linger, leaving individuals questioning what to believe.”

She cautioned that conversations sparked by fake clips could be pervasive, especially during the election campaign. “Trust is lacking, leading to a growing burden of determining truth from falsehood.”

However, with advancing technology and AI-driven manipulation, the responsibility of discerning truth still falls on individuals.

Mishi Choudhary, a digital lawyer and online civil liberties activist, said that although there was not much empirical data as AI-detection tools were relatively new, historical evidence suggested that manipulated media had a wide and deep impact, particularly when used by political actors at times such as elections.

“Companies are making some promises like labelling AI-generated content, but these systems aren’t ready yet.

“Companies are critical here, but we also need thoughtful regulation [requiring] those who deploy these systems to label and disclose that AI has been used,” she said.

Voters at a polling station in Barmer, Rajasthan, on Friday.
