The Manila Times

The quest for authentici­ty in a digital world

BY DIWAKAR DAYAL

THE potential for deepfakes to sway public opinion and influence the outcome of India's Lok Sabha elections is raising red flags throughout the cybersecurity community. While Indians decide which candidate best represents their views, deepfakes and generative technologies make it easy for manipulators to create and spread realistic videos of a candidate saying or doing something that never actually occurred.

Deepfake threat in politics

The use of deepfakes in politics is particularly alarming. Imagine a scenario where a political candidate appears to give a speech or make statements that have no basis in reality. These AI-generated impersonations, built from a person's prior videos or audio clips, create a fabricated reality that can easily sway public opinion. In an environment already riddled with misinformation, deepfakes take the challenge to a whole new level.

For instance, the infamous case in which Ukrainian President Volodymyr Zelenskyy appeared to concede defeat to Russia is a stark reminder of the power of deepfakes to influence public sentiment. Although that deception was identified through its imperfect rendering, there is no way of knowing how many viewers continue to believe a fake even after it has been disproved, showcasing the potential for significant political disruption.

A danger in the digital workplace

Employees, often the weakest link in security, are especially vulnerable to deepfake attacks. A convincing deepfake of a trusted colleague or superior could easily trick them into divulging sensitive information. The implications for organizational security are profound, highlighting the need for advanced, AI-driven security measures that can detect anomalies in user behavior and access patterns.

Double-edged sword of AI in cybersecurity

It is important to recognize, however, that AI, the very technology behind deepfakes, also gives hackers powerful means to discover cybersecurity loopholes and breach business networks. Yet while AI may help threat actors find new vulnerabilities, it can equally be used to build countermeasures, such as identifying patterns in data that would otherwise go unnoticed.

A system could then flag potential deepfake content and remove it before it achieves its goal. Such automation could also help bridge the global skills gap in cybersecurity, enabling analysts to focus on strategic decision-making rather than sifting through endless data.
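The flag-and-remove flow described above can be sketched in a few lines. This is a minimal illustration only: `detect_deepfake_score` is a hypothetical stand-in for a trained forensic detector, and the threshold value is an assumption, not a recommendation.

```python
def detect_deepfake_score(media: bytes) -> float:
    """Hypothetical detector: returns the probability that media is synthetic.
    A real system would run a trained forensic model here, not a stub."""
    return 0.9 if b"synthetic" in media else 0.1


def triage(uploads: list[bytes], threshold: float = 0.8) -> list[bytes]:
    """Keep only uploads scoring below the deepfake threshold;
    everything at or above the threshold is flagged and dropped."""
    return [m for m in uploads if detect_deepfake_score(m) < threshold]


queue = [b"real clip", b"synthetic clip"]
print(len(triage(queue)))  # prints 1: the flagged item was removed
```

The point of the sketch is the pipeline shape, score then gate, which lets analysts review only borderline cases instead of every upload.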

Data dilemma

The proliferation of deepfakes feeds into the broader problem of fake news and bots, further eroding people's ability to distinguish legitimate sources from manipulated ones. A news story crafted by AI and amplified through deepfakes could breed public distrust or even incite mass unrest.

But let's not forget that on the digital battlefield, AI is a weapon wielded by both defenders and attackers. Deploying algorithms to verify that data is unmanipulated, or to derive mitigations from patterns in that data, could open new use cases for secure AI growth.

Regulatory-guided solution

Combating deepfakes requires a multifaceted approach, one that India's existing IT Act does not provide.

Legal frameworks specifically targeting the malicious creation and distribution of deepfakes are essential, along with international cooperation to manage the transnational nature of digital media. In the realm of technology and AI, ethical guidelines must be established to regulate the development and use of deepfake technologies. Media authentication frameworks, public awareness campaigns and media literacy initiatives will be crucial in empowering individuals to distinguish between real and synthetic content. This collective effort is key to maintaining the integrity of digital media and the broader democratic process.

A business-first solution

The global call for regulating generative AI, including deepfakes, is growing. However, it's important to recognize that comprehensive regulations primarily govern those within an industry, not individuals who operate outside legal boundaries.

Companies must prioritize AI-driven cybersecurity solutions as part of a broader, company-wide approach that intertwines safety with quality across all aspects of their operations. From online behavior to development processes, a centralized, AI-ingested understanding of an organization's baseline is crucial. Such technologies could identify breaches in real time, whether perpetrated by external threat actors or by employees misled by deepfakes. This proactive stance is essential for maintaining integrity and security in a digital landscape increasingly complicated by AI technologies.
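The idea of a behavioral baseline can be illustrated with a deliberately simple statistical check. Assuming, purely for illustration, that an organization logs per-user daily access counts, the sketch below flags a day that deviates sharply from the established pattern; production platforms model far richer behavior than a single z-score.

```python
from statistics import mean, stdev


def is_anomalous(history: list[float], today: float, z_threshold: float = 3.0) -> bool:
    """Flag today's activity if it deviates more than z_threshold
    standard deviations from the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # flat baseline: any change is a deviation
    return abs(today - mu) / sigma > z_threshold


baseline = [12, 15, 11, 14, 13, 12, 16]  # hypothetical daily file accesses
print(is_anomalous(baseline, 14))   # an ordinary day: False
print(is_anomalous(baseline, 240))  # a sudden spike: True
```

A spike like the second case might mean a compromised account, or an employee acting on instructions from a convincing deepfake; either way, the deviation from baseline is what surfaces it for review.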

Diwakar Dayal is the managing director and country manager at SentinelOne, a US-based cybersecurity company that delivers the defenses businesses need to prevent, detect and undo cyberthreats.
