HT Navi Mumbai

The search for balance in regulating deepfakes

- Ashish Bharadwaj, D Daniel Sokol, Chirantan Chatterjee, and Simrean Bajwa. Ashish Bharadwaj is dean and professor, BITS Law School, Mumbai; D Daniel Sokol is professor, USC Gould Law School; Chirantan Chatterjee is professor, Sussex University; Simrean Bajwa is a researcher, BITSLA

Deepfake videos have become a way to garner millions of views and generate revenue on social media. From Barack Obama calling Donald Trump "a complete dips..." to President Volodymyr Zelensky telling Ukrainians to lay down their arms, we have already seen deepfakes go viral. While most of these hoaxes have been put to rest, they raise unsettling questions about where reality ends and fiction takes over.

Photo manipulation can be traced back to the 1800s, when retouching images to create an idealised picture was a regular practice. What has changed with the advent of technology is the ease and speed with which such manipulation can be done. The consequences of deepfake technology could be devastating: in seconds, information gets communicated across various platforms. The diffusion of any technology in an ecosystem depends on trust, and deepfakes have created a trust deficit in a society where data alteration has become easier to do and more difficult to spot.

Yet, at the outset, the term deepfake is problematic, as it carries a negative connotation that overlooks the innovation and positive effects of Artificial Intelligence (AI). First coined by a Reddit user of the same name in 2017, the term refers to synthetic or altered content generated by deep learning algorithms. AI has great potential to reshape society for the better, and its deployment across various sectors has been a game changer. In the global campaign to end malaria, David Beckham delivered an appeal in nine languages using a deepfake voice. This illustrates how deepfake technology can be harnessed to bring to life creative ideas that could not be realised in the past.

On the flip side, many privacy and defamation issues have surfaced in the recent past. The risks identified include, but are not limited to, deepfake revenge porn, reputational damage, defamatory videos, voice cloning, news media manipulation, financial fraud, and threats to national security. A large share of deepfake adult content targets entertainment industry celebrities, and women in general are the most vulnerable to non-consensual deepfake videos, raising questions about their safety and privacy. The rise of deepfake pornography is driven by the availability of user-friendly tools and software, coupled with the negligible cost of swapping faces.

Another concern is ethical. Music composer AR Rahman found himself in hot water for using AI tools to resurrect the voices of the late singers Bamba Bakya and Shahul Hameed for a track. Cloning a voice not only triggers personality rights but also poses serious questions about how far we can push technology in the name of creativity. Voice forms a part of personality rights, and if such practices go unchecked, they may result in long-pending lawsuits or eventually even replace human artistes.

Since existing legislation was drafted long before the emergence of AI technologies, there are gaps that need to be addressed. A blanket ban on deepfake content could stifle innovation and creativity. Another factor that requires attention is the lack of regulatory harmonisation across jurisdictions, and across specific areas of law within a particular jurisdiction. Consequently, enforcement remains a challenge.

Instead of starting afresh, there needs to be a dialogue between intermediaries, industry experts, and the government to arrive at tech-based solutions. These would broadly include identification of deepfake content, labelling it and notifying the concerned party, and serving a takedown notice to the platform. Further, digital platforms need to adopt a comprehensive approach to deepfake content. Meta, for instance, has a three-pronged approach. The first prong is transparency, which helps users understand when they are interacting with AI-generated content. The second is for digital platforms to enforce existing community standards and self-regulation, which ensures the removal of content that does not adhere to the standards prevalent in the industry. The third is cross-industry collaboration to combat the deceptive use of AI. Ahead of the 2024 elections, Meta announced it was revamping its strategy towards altered content.

The complexity and multifaceted nature of the deepfake problem underscores the need for a uniform regulatory and enforcement mechanism. Governments have to strike a balance between innovation on the one hand and community welfare on the other, before the perils eventually outweigh the perks.

