Seeing is not always believing
It was only a matter of time before technologies such as artificial intelligence and machine learning were called out for their questionable usage. The latest controversy came in the aftermath of former Karnataka Chief Minister and senior BJP leader Sadananda Gowda filing a complaint about a morphed viral video of an obscene nature that allegedly features an individual bearing his likeness. Denying that he was the person in the video, Gowda said that adversaries with vested interests, keen on maligning his reputation, were spreading such content.

One of the terms Gowda happened to name-drop was 'deepfake'. The portmanteau, a combination of 'deep learning' and 'fake', refers to synthetic media in which a person in an existing image or video is replaced with someone else's likeness, often so convincingly that the result is hard to distinguish from genuine footage. Gowda's allegation comes in the wake of six ministers from Karnataka approaching a Bengaluru court almost six months ago, pleading with it to impose a cease-and-desist order on the spread of certain fake videos featuring their likenesses.

Initially, deepfake tech was employed by enthusiasts to create entertaining videos that went viral, owing to the uncanny manner in which celebrities could be morphed into uttering movie dialogues made famous by other individuals, using video and audio wizardry. In the US, a stand-up comedian worked with a popular media company three years ago to create a deepfake featuring the funny man's voice and former US President Barack Obama's likeness. Ironically, the video was meant to be a public service announcement warning citizens of the danger of deepfake videos. In India, deepfakes were employed by the BJP in the 2020 Delhi Legislative Assembly election campaign. The party used the technology to lip-sync its leader Manoj Tiwari's English speech into Haryanvi to target voters from the region.
While the candidate did not speak the voters' language, all it took was deepfake tech and a dubbing artiste's voiceover to make Tiwari's English speech appear to be a Haryanvi one, complete with criticisms of his opponent that were not part of the original speech. Apart from posing a threat to individual privacy, deepfake videos can be used to influence electorates, incite unrest through propaganda and spread misinformation.

This is why many nations have now enacted strict legislation on the subject. America's DEEPFAKES Accountability Act, introduced in 2019, mandates that deepfake videos and images be watermarked for identification. Similarly, China requires that synthetically altered videos carry a notice informing users that the content is fake, failing which the creators can be penalised. In the UK, producers of deepfake content can be prosecuted for harassment, while in Canada, citizens have been empowered with multiple options to battle deepfake targeting.

In India, we do not have an explicit law banning deepfakes. However, Sections 67 and 67A of the IT Act, 2000, penalise the publication of sexually explicit material in electronic form, and Section 500 of the IPC, 1860, criminalises defamation. Experts argue, though, that by themselves these laws are toothless when it comes to countering the menace of deepfakes.

One solution would be mandating the use of digital signatures to authenticate the origin of each video. Researchers have also called for government administrations to adopt blockchain technology to monitor deepfake videos. A blockchain record is tamper-evident and can expose even minuscule manipulations: its decentralised design allows anyone to verify the originality of the data by comparing its distinct, non-invertible key, in effect a cryptographic hash. Once passed, India's Personal Data Protection Bill, 2019, will also prohibit the usage and circulation of such videos.
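To illustrate the idea of verifying a video against its distinct, non-invertible key, here is a minimal Python sketch using a cryptographic hash (SHA-256). The file name and the registry are hypothetical stand-ins; a real system would anchor the registered hashes on a blockchain or sign them with the publisher's digital signature, but the comparison step works the same way.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hash of the raw video bytes.

    The hash is non-invertible: it cannot be reversed to recover the
    video, but any change to the bytes produces a different hash.
    """
    return hashlib.sha256(data).hexdigest()

# At release time, the publisher registers the hash of the authentic video.
# (Byte strings below stand in for real video files.)
original = b"raw bytes of the authentic speech video"
registry = {"speech.mp4": fingerprint(original)}

# Later, anyone can check a circulating copy against the registered hash.
untouched_copy = b"raw bytes of the authentic speech video"
tampered_copy = b"raw bytes with a single frame altered"

print(registry["speech.mp4"] == fingerprint(untouched_copy))  # True
print(registry["speech.mp4"] == fingerprint(tampered_copy))   # False
```

Because even a one-bit manipulation changes the hash, a mismatch flags the copy as altered; the decentralised ledger's role is simply to ensure the registered hash itself cannot be quietly rewritten.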
And it’s one more reason why the government must proceed with implementing such legislation on a war footing.