Hindustan Times (Jalandhar)

Address the deepfake problem

Regulators such as the Election Commission must act before this becomes unmanageable

- ANANTH PADMANABHAN. Ananth Padmanabhan is dean, Academic Affairs, Sai University and visiting fellow, Centre for Policy Research. The views expressed are personal

War and sex have been pivotal drivers of consumer tech in the past. Satellite navigation, penicillin, microwave ovens and superglue all trace their origins to battlefield imperatives. It is also no coincidence that for a couple of decades, the flagship consumer electronics show at Las Vegas happened alongside the adult entertainment expo, usually in the same building. The Internet owes a great deal to the United States military for its inception, as well as to the pornography industry for its rapid diffusion.

More recently, a third influence has joined these two drivers of consumer tech: politics. Political actors the world over have begun adopting new technological solutions to substantially shape public opinion. Both porn and politics are at the vanguard of the consumer demand that drives digital technologies.

The Barack Obama campaign of 2008 marked the beginning of this trend, with the Republican opposition coming a cropper against the former's social media onslaught. Subsequent electoral battles have seen favoured technologies of the season emerge, including targeted digital marketing through social media posts and tweets, and constructed echo chambers of viral political opinion using personal messaging apps.

The recent video of a Bharatiya Janata Party (BJP) politician speaking in doctored English, with an accent that may appeal to a certain voter base, has sparked allegations of the first resort to a "deepfake" in Indian politics. This episode forces the question: will deepfakes become the new and shiny tech tools at the disposal of the propaganda industry?

Deepfake videos are a substantial advance over the clumsy image morphs of the nineties. They are produced using an array of artificial intelligence and deep learning techniques, collectively termed generative adversarial networks (GANs), to believably mimic the real world, be it images, music, speech or prose. As happened with Nancy Pelosi recently, the greater the public availability of video footage of an individual, the stronger the possibility of algorithmically generating fake videos of her.

There are three major problems with deepfakes that render them particularly worrisome. The first relates to the compelling narrative that the moving image creates in our minds. To be sure, from fake news to phishing emails, the world wide web is a crucible of fraud and deception. Yet deepfake videos trouble us because, internally, we place differential levels of trust in what we read and what we view. While the former is an expression of something inside a person's mind, the latter is an outcome of physical movement. Because we are conscious that we have many more data points with which to visually assess and repudiate a fake in the latter scenario, we also place more confidence in our judgment. A fake well done will therefore attract much less self-doubt.

The second problem is that refuting deepfake videos becomes far more difficult because of the manner in which GANs operate to create them. Even videos and audio clips doctored using much less advanced technologies are not easy to refute, given the technical processes of alteration. The problem becomes worse with GANs. These adversarial networks deploy an architecture of two neural networks pitted against each other. The generator network analyses datasets from the real world and generates new data that appears to belong to those datasets, while the discriminator network evaluates the generated data for authenticity. Through multiple cat-and-mouse rounds between the two networks, the generated data attains high levels of authenticity, spawning synthetic data that nearly matches real data. By its very design, significant data, and algorithms capable of parsing it, are needed to verify the synthetic output. The discerning member of a WhatsApp family group may find her voice of reason lost in such situations.
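The cat-and-mouse rounds described above can be made concrete with a minimal numpy sketch. This is a toy illustration, not a production deepfake system: the "real world" here is just a one-dimensional bell curve, and the generator and discriminator are single-parameter models invented for the example. The adversarial structure, though, is the same: the discriminator learns to score authenticity, and the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real world": scalar samples drawn from N(4, 0.5).
REAL_MEAN, REAL_STD = 4.0, 0.5

def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator g(z) = a*z + b maps random noise onto (it hopes) the real distribution.
# Discriminator d(x) = sigmoid(w*x + c) scores how "authentic" a sample looks.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.02, 64

for step in range(5000):
    # --- discriminator round: learn to separate real from generated samples ---
    z = rng.normal(size=batch)
    fake = a * z + b
    real = sample_real(batch)
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # gradients of the loss  -log d(real) - log(1 - d(fake))
    gw = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    gc = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- generator round: adjust a, b to fool the (frozen) discriminator ---
    z = rng.normal(size=batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    # gradients of the loss  -log d(fake)  w.r.t. the generator parameters
    ga = np.mean(-(1 - d_fake) * w * z)
    gb = np.mean(-(1 - d_fake) * w)
    a -= lr * ga
    b -= lr * gb

# After many rounds, generated samples drift toward the real distribution.
fakes = a * rng.normal(size=1000) + b
print(f"real mean ~ {REAL_MEAN}, generated mean ~ {fakes.mean():.2f}")
```

The point of the sketch is the article's point: verification is asymmetric. Training this loop needs only noise and data, while telling the final `fakes` apart from `sample_real` output requires exactly the kind of statistical machinery most viewers do not have.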

The fact that human judgment can no longer serve as a first line of defence against this barrage of automatically generated deepfakes also makes it abundantly clear that we are confronted with an ethical choice. The broad choice is to either sign up for a world where truth is algorithmically determined, or one where we protect the human element but forgo much of the progress that GANs promise for the domain of artificial intelligence.

The ethical choice outlined above must at some point translate into regulatory action, a matter that is predominantly the preserve of politics. This creates the third problem: the special attraction that political campaigns potentially hold for deepfakes. If political actors simultaneously benefit from the GAN-written rules of truth and falsehood, they may do no better than leave matters to self-regulation. We already saw this with the Internet and Mobile Association of India's ineffective voluntary code to tackle misinformation during the 2019 parliamentary elections. Deepfakes are more concerning, and India must avoid being formulaic in her response.

When politics drives consumer tech, it is ethically different from the porn industry's early adoption. As Jonathan Coopersmith noted in 1998, the subject matter of the latter stands in the way of publicly accepting its endorsement. But with political actors, we run a higher risk of failing to evaluate the adopted technology for its long-term harms. For this additional reason too, independent regulators like the Election Commission of India must begin to address the deepfake problem before it becomes an unmanageable crisis.

[Photo: ISTOCKPHOTO] Due to its sophistication, refuting deepfake videos is not easy
