DT Next

Pandora’s Box


Union Minister Rajeev Chandrasekhar reminded Google last week that explanations about the unreliability of AI models do not absolve or exempt platforms from laws. He warned the company that India’s digital citizens are not to be experimented on with unpredictable platforms and algorithms. The comments came in the aftermath of the Big Tech enterprise’s AI tool Gemini generating ‘an objectionable response, reeking of bias’ to a question pertaining to PM Modi. Google conceded that the chatbot ‘may not always be reliable’ in responding to certain prompts related to current events and political topics.

The Gemini fracas made headlines after its image generation algorithms began taking a woke approach to history. When prompted to generate pictures of a German soldier in 1943, the chatbot depicted people of colour in army uniforms. The development prompted apprehensions about AI’s potential to add to the internet’s vast pool of misinformation. The tech giant has temporarily suspended the chatbot’s ability to generate images of people.

The developments surrounding AI are critical when you consider the countries, India among them, that are bracing for elections this year. Against the backdrop of the presidential primaries underway across the US, a report was published last month based on the findings of AI experts and a bipartisan group of election officials. The study said that popular chatbots have been generating false and misleading information that threatens to disenfranchise voters in America. All five models tested — OpenAI’s ChatGPT-4, Meta’s Llama 2, Google’s Gemini, Anthropic’s Claude, and Mixtral from the French company Mistral — failed to varying degrees when asked to respond to basic questions about the democratic process.

Participants rated more than half of the chatbots’ responses as inaccurate and categorized 40% of them as harmful, including for perpetuating dated and inaccurate information that could limit voting rights. There is a perception that AI tools — which can micro-target political audiences, mass-produce persuasive messages, and generate realistic fake images and videos — will increase the spread of false and misleading information during elections. Attempts at AI-generated election interference have already begun: AI robocalls that mimicked US President Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.

Politicians have also used AI chatbots to communicate with voters, added AI-generated images to ads, and brought avatars to political meets — something witnessed here in Tamil Nadu as well. Two weeks ago, major technology companies signed a largely symbolic pact to adopt ‘reasonable precautions’ to prevent their tools from being used to generate increasingly realistic images, audio and video that could interfere with this year’s global elections.

In India, the government has issued an advisory requiring platforms to label under-trial AI models, large language models, generative AI, algorithms and similar software, while preventing the hosting of unlawful content. The advisory warns of criminal action in case of non-compliance, and applies to significant players and untested platforms, not to startups. A Pandora’s Box has been unleashed on the populace, and it will take more than a slap on the wrist to rein in this coiled-up beast.
