The Hindu (Kolkata)

Many elections, AI’s dark dimension

- M.K. Narayanan

The rapid development of Artificial Intelligence (AI) models suggests that we are at an inflection point in the history of human progress. The speed with which newer skills are being developed suggests that the day is not far off when Generative Artificial Intelligence (GAI) will transform into Artificial General Intelligence (AGI), which can mimic the capabilities of human beings. Such a situation could revolutionise our ideas about what to expect from machines. Breakthroughs in the AI domain will bring about a new chapter in human existence, including in the way people react to both facts and falsehoods.

The potential of AI is already clear. Many, such as Sam Altman of OpenAI in the United States, believe that it is the most important technology in history. AI protagonists further believe that AI is set to turbocharge, and dramatically improve, the standard of living of millions of human beings. It is, however, unclear as of now whether, as many doomsayers aver, AI would undermine human values, and whether advanced AI could pose ‘existential risks’.

AI and the electoral landscape

With the seven-phase general election in India having been announced, to be held from April 19 to June 1, 2024, political parties and the electorate cannot afford to ignore the AI dimension. This year, elections are also scheduled (according to some reports) in as many as 50 other countries across the globe apart from India, among them Mexico, the United Kingdom (where, by law, the last possible date for a general election is January 28, 2025) and the United States.

These elections are set to alter the fate of millions of people, and policymakers and the electorate need to ponder the positive and negative impacts of this new technology. Rapid technological breakthroughs in AI (especially its latest manifestation, Generative AI, which provides dynamic simulations and mimics real-world interactions) carry their own burdens. It may be too early to fully contemplate the possible impact of AGI (AI systems that simulate the capability of human beings), but all this points to yet another dimension of electoral dynamics that cannot be ignored.

It may, hence, not be wrong to consider the elections of 2024 a curtain-raiser for whether AI and its offerings (such as Generative AI) will prove to be a game changer. The world is by now aware that AI models such as ChatGPT, Gemini and Copilot are being employed in many fields, but 2024 will be a test case of whether AI’s newer models can alter electoral behaviours and verdicts as well. The good news, perhaps, is that those wishing to employ Generative AI to try and transform the electoral landscape do not have adequate time to fine-tune their AI models. It would, however, still be a mistake to underestimate the extent to which AI could impact the electoral landscape this time as well. What might not happen in 2024 may well happen in the next round of elections, both in India and worldwide.

(The writer is a former Director, Intelligence Bureau, a former National Security Adviser, a former Governor of West Bengal, and a former Executive Chairman of CyQureX Private Limited, a U.K.-U.S. cyber security joint venture.)

A recently published Pew survey (if it can be treated as reliable) indicates that a majority of Indians support ‘authoritarianism’. Those employing AI could well have a field day in such a milieu, further confusing the electorate. As it is, many people are already referring to the elections of 2024 worldwide as the ‘Deep Fake Elections’, after the fabrications created by AI software. Whether or not this is wholly true, the Deep Fake syndrome appears inevitable, given that each new election lends itself to ever newer techniques of propaganda, all with the aim of confusing and confounding the electorate. From this, it is but a short step to the inevitability of Deep Fakes.

Tackling AI ‘determinism’

AI technology makes it easier to amplify falsehoods and entrench mistaken beliefs. Disinformation is hardly a new methodology or technology, and it has been employed in successive elections previously. What is new is that sophisticated AI tools will be able to confuse the electorate to an extent not previously known or even envisaged. The use of AI models to produce reams of wrong information, apart from disinformation, accompanied by near-realistic images of things that do not exist, will be a whole new experience. What can be said with some degree of certainty is that in 2024, the quality and quantity of disinformation are set to overwhelm the electorate. What is more worrying is that the vast majority of such information would be incorrect. Hyper-realistic Deep Fakes employed to sway voters, and micro-targeting, are set to scale new heights.

The potential of AI to disrupt democracies is, thus, very considerable. Simply being aware of the disruptive nature of AI and AI fakes is not enough. It may be necessary, for democracies in particular, to prevent such tactics from distorting the ‘thought behaviour’ of the electorate. AI-deployed tactics will tend to make voters more mistrustful, and it is important to introduce checks and balances that would obviate efforts at AI ‘determinism’. Notwithstanding all this, and while being mindful of the potential of AGI, panic is not warranted. There are many checks and balances available that could be employed to negate some of AI’s more dangerous attributes.

The wide publicity given to a spate of recent inaccuracies associated with Google is a timely reminder that AI and AGI cannot be trusted in each and every circumstance. There has been public wrath worldwide, including in India, over Google AI models portraying persons and personalities in a malefic manner, mistakenly or otherwise. These episodes reflect well the dangers of ‘runaway’ AI.

Inconsistencies and undependability still stalk many AI models and pose inherent dangers to society. As AI’s potential and usage increase in geometric proportion, threat levels are bound to go up. As of now, even as the potential of AI remains very considerable, it tends to be undependable. More so, its ‘mischief potential’ cannot be ignored.

As nations increasingly depend on AI solutions for their problems, it is again important to recognise what many AI experts label as AI’s ‘hallucinations’. In simple terms, what these experts are implying is that ‘hallucinations’ make it hard to accept and endorse AI systems in many instances. What they further imply, especially in the case of AGI, is that such systems tend at times to make things up in order to solve new problems. Their outputs are often probabilistic in character and cannot be accepted ipso facto as accurate. The implication of all this is that too much reliance on AI systems at this stage of development may be problematic. The stark reality, though, is that there is no backtracking from what AI or AGI promises, even if the results are less dependable than one would like.

We also cannot afford to ignore other existential threats associated with AI. The dangers on this account pose an even greater threat than the harm arising from bias in design and development. There are real concerns that AI systems oftentimes tend to develop certain inherent adversarial vulnerabilities, and suitable concepts and ideas to mitigate them have not yet been developed. The main types of adversarial attack, overshadowing other inbuilt weaknesses, are: ‘poisoning’, which typically degrades an AI model’s ability to make relevant predictions; ‘backdooring’, which causes the model to produce inaccurate or harmful results; and ‘evasion’, which causes a model to misclassify malicious or harmful inputs, detracting from its ability to perform its appointed role. There are possibly other problems as well, but it may be too early to enumerate them with any degree of certainty.
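The ‘poisoning’ attack described above can be made concrete with a toy example. The sketch below is entirely hypothetical and not drawn from this article: it trains a trivial one-dimensional threshold classifier, then shows how an attacker who injects a few deliberately mislabelled training points can drag the learned threshold far enough to ruin the model’s predictions, even though the clean data never changed.

```python
# Hypothetical illustration of training-data "poisoning": all data,
# names and numbers here are invented for demonstration purposes.
import statistics

def train_threshold(points, labels):
    """Learn a decision threshold as the midpoint of the two class means."""
    mean0 = statistics.mean(x for x, y in zip(points, labels) if y == 0)
    mean1 = statistics.mean(x for x, y in zip(points, labels) if y == 1)
    return (mean0 + mean1) / 2

def accuracy(threshold, points, labels):
    """Classify x as class 1 when it exceeds the threshold; score against labels."""
    preds = [1 if x > threshold else 0 for x in points]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Clean training data: class 0 clusters near 1.0, class 1 near 5.0.
train_x = [0.8, 1.0, 1.2, 4.8, 5.0, 5.2]
train_y = [0, 0, 0, 1, 1, 1]
test_x = [0.9, 1.1, 4.9, 5.1]
test_y = [0, 0, 1, 1]

clean_t = train_threshold(train_x, train_y)
print(accuracy(clean_t, test_x, test_y))  # 1.0 — clean model is perfect

# Poisoning: the attacker injects two far-out points falsely labelled
# class 0, dragging the class-0 mean (and hence the threshold) upward.
poison_x = train_x + [20.0, 20.0]
poison_y = train_y + [0, 0]

poisoned_t = train_threshold(poison_x, poison_y)
print(accuracy(poisoned_t, test_x, test_y))  # 0.5 — every class-1 test point is now misread
```

Two mislabelled points out of eight suffice here because the averaging step trusts every training example equally; real models are larger, but the mechanism — corrupting what the model learns rather than what it later sees — is the same.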

India’s handling of AI

Elections apart, India, being one of the most advanced countries in the digital arena, again needs to treat AI as an unproven entity. While AI brings benefits, the nation and its leaders should be fully aware of its disruptive potential, especially that of AGI, and should act with due caution. India’s lead in digital public goods could be both a benefit and a bane, given that while AGI provides many benefits, it can be malefic as well.

With a series of elections to be held across the world in 2024, the potential of AI to disrupt democracie­s cannot be dismissed

