The Indian Express (Delhi Edition)
HOW TO SPOT AI-POWERED POLITICAL DISINFORMATION IN POLL SEASON
THE first phase of voting for the Lok Sabha elections is on Friday. Over the past few weeks, there has been a deluge of disinformation and manipulated media online.
Two videos of actor Aamir Khan went viral this week. Both were manipulated versions of a promo for Khan’s popular TV show, Satyamev Jayate. In one, Khan appears to be explicitly supporting the Congress party, while in the other, he is seen speaking about nyay (justice) — a key Congress talking point in recent years, and the title of its manifesto (Nyay Patra or ‘Document [for] Justice’).
Recently, actor Ranveer Singh too was a victim of deepfake technology, when a manipulated video of him criticising Prime Minister Narendra Modi on the issues of unemployment and inflation was widely shared. In the original clip, however, Ranveer was actually praising the prime minister.
Here is how these deepfake videos are made — and how you can spot them.
Voice swap technology
Itisaar.ai, an AI detection tool developed in collaboration with IIT Jodhpur, shows that these videos were generated using ‘voice swap’ technology.
As the name suggests, this refers to the process of using an AI algorithm to either alter or mimic an individual’s voice. The technology also allows the creators to change the characteristics of a voice, such as accent, tone, pitch, and speech patterns, to make the videos more realistic.
Currently, several easy-to-use AI voice swap tools are available for free. The creator simply has to upload or record the audio sample she wants to replace, and then customise the settings to make the uploaded sample sound as realistic as possible.
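For readers curious about the mechanics, here is a much-simplified sketch in Python of one voice characteristic being altered programmatically — pitch, shifted by naive resampling. Real voice-swap tools rely on neural networks trained on large speech datasets; this toy example, with a sine tone standing in for a recording, only illustrates the general idea.

```python
import math

def make_tone(freq_hz, duration_s=1.0, sample_rate=8000):
    """Generate a sine tone standing in for a voice recording."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

def shift_pitch(samples, factor):
    """Raise (factor > 1) or lower (factor < 1) pitch by naive resampling.

    Note: this also changes duration. Real tools preserve timing with
    far more sophisticated methods (e.g. phase vocoders or neural vocoders).
    """
    return [samples[int(i * factor)] for i in range(int(len(samples) / factor))]

def dominant_freq(samples, sample_rate=8000):
    """Estimate a signal's frequency by counting zero crossings."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if a < 0 <= b or b < 0 <= a
    )
    return crossings * sample_rate / (2 * len(samples))

voice = make_tone(200)            # a 200 Hz "voice"
higher = shift_pitch(voice, 1.5)  # shifted up by 50%
print(round(dominant_freq(voice)), round(dominant_freq(higher)))
```

Running this shows the estimated pitch rising from roughly 200 Hz to roughly 300 Hz — the same recording, made to sound different with a few lines of code.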
Spotting deepfakes
While it is not easy to spot well-produced deepfakes, here are some tips to keep in mind while scrolling through social media, especially during election time.
Be cautious of audio or video content from unfamiliar sources, especially if it seems controversial or sensational. Verify the authenticity of any suspicious post by cross-referencing with reliable sources, and trustworthy media organisations.
Deepfake audio may exhibit subtle anomalies, such as the voice’s unnatural tenor, slightly robotic speech, and irregular pauses. Listen closely for these telltale signs of manipulated or synthetic speech.
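One of those cues — irregular pauses — can be illustrated with a toy heuristic: natural speech tends to have uneven gaps between phrases, while some synthetic speech paces its pauses more uniformly. The pause durations and the 0.2 threshold below are illustrative made-up values, not figures from any real detector.

```python
import statistics

def pause_regularity(pause_durations_s):
    """Coefficient of variation of pause lengths.

    Lower values mean more uniform (machine-like) pacing.
    """
    mean = statistics.mean(pause_durations_s)
    return statistics.pstdev(pause_durations_s) / mean

def looks_robotic(pause_durations_s, threshold=0.2):
    # The 0.2 cut-off is arbitrary, chosen only for this illustration.
    return pause_regularity(pause_durations_s) < threshold

natural = [0.12, 0.45, 0.20, 0.80, 0.33]    # varied, human-like gaps (seconds)
synthetic = [0.30, 0.31, 0.29, 0.30, 0.30]  # suspiciously uniform gaps

print(looks_robotic(natural), looks_robotic(synthetic))  # prints: False True
```

Production detectors combine many such signals and learn their thresholds from data, but the intuition is the same: machine-generated speech is often too regular.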
Deepfake audio is often accompanied by manipulated visual content, such as altered video footage. Check both the audio and visual elements for discrepancies or inconsistencies. For instance, if the lips do not move in sync with the speech, the video you are watching may be manipulated.
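The lip-sync check can be sketched in simplified form: if the audio’s loudness rises and falls with the speaker’s mouth opening, the two signals correlate; a swapped voice track tends not to. Both series below are made-up per-frame measurements, and the 0.5 threshold is an arbitrary illustrative cut-off.

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

mouth_opening = [0.1, 0.8, 0.9, 0.2, 0.7, 0.1]     # per video frame
real_loudness = [0.2, 0.7, 0.8, 0.1, 0.9, 0.2]     # tracks the mouth
swapped_loudness = [0.9, 0.1, 0.2, 0.8, 0.1, 0.9]  # from another recording

SYNC_THRESHOLD = 0.5  # illustrative only
print(pearson(mouth_opening, real_loudness) > SYNC_THRESHOLD)     # prints: True
print(pearson(mouth_opening, swapped_loudness) > SYNC_THRESHOLD)  # prints: False
```

Real lip-sync detectors extract mouth movement from video frames automatically and use learned models, but the underlying question is the same one a careful viewer asks: do the lips match the words?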
Staying updated about day-to-day news and events is key to recognising the risks associated with deepfakes. It is harder to fool people who have general awareness of what is happening around them.
A few AI detectors, such as Optic’s ‘AI or Not’, are available for free. You can upload any suspicious audio or video to such a detector, which will indicate whether the content is likely to be authentic.