The Hindu (Kolkata)

Why has government issued an AI advisory?

What does the March 1 notification signal to tech firms? Will they need to seek government permission before putting out ‘undertested’ Artificial Intelligence models? Why do some in the industry feel this move will hinder innovation?

- Aroon Deep

‘The advisory is an opportunity in disguise. It points to a need for local AI stacks, datasets, graphics processing units’

The story so far: On March 1, the Ministry of Electronics and Information Technology (MeitY) issued an advisory to the Artificial Intelligence industry. It said that all generative AI products, like large language models on the lines of ChatGPT and Google’s Gemini, would have to be made available “with [the] explicit permission of the Government of India” if they are “undertesting/unreliable”.

What is the government’s stand?

The advisory represents a starkly different approach to AI research and policy from what the government had previously signalled. It came soon after Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, reacted sharply to Google’s Gemini chatbot, whose response to a query, “Is [Prime Minister Narendra] Modi a fascist?” went viral. Mr. Chandrasekhar said the ambivalent response by the chatbot violated India’s IT law.

How has it been received?

The advisory has divided industry and observers on a key question: was this an ‘advisory’ in the classic sense, reminding companies of existing legal obligations, or was this a mandate? “It sounds like a mandate,” Prasanth Sugathan, Legal Director at the Delhi-based Software Freedom Law Centre, said at an event on Thursday. The document, sent to large tech platforms, including Google, instructed recipients to submit an “[a]ction taken-cum-status Report to the Ministry within 15 days.” Mr. Chandrasekhar insisted that there were “legal consequences under existing laws (both criminal and tech laws) for platforms that enable or directly output unlawful content,” and that the advisory was put out for firms “to be aware that, platforms have clear existing obligations under IT and criminal law.” Mr. Chandrasekhar referred to rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which prohibits unlawful content like defamation, pornography, disinformation and anything that “threatens the unity … and sovereignty of India.” He added that the rules were intended for large tech firms and wouldn’t apply to startups.

The government hasn’t elaborated in detail on how IT laws can apply to automated AI systems in this way. Pranesh Prakash, a technology lawyer who is an affiliated fellow at the Yale Law School’s Information Society Project, said the advisory was “legally unsound,” and compared it to the Draft National Encryption Rules of 2015, a quickly withdrawn proposal to outlaw strong encryption of data in India.

The advisory also included a requirement for AI-generated imagery to be labelled as such, something the industry has made only halting efforts to implement.

Amazon Web Services has tried implementing an ‘invisible’ watermark, but has expressed concern that such a move would be of little use, as watermarks can be edited out fairly easily.

Rahul Matthan, a technology lawyer and partner at the firm Trilegal, urged a more permissive approach to AI systems. “In most instances, the only way an invention will get better is if it is released into the wild — beyond the confines of the laboratory in which it was created,” Mr. Matthan wrote after the advisory was released. “If we are to have any hope of developing into a nation of innovators, we should grant our entrepreneurs the liberty to make some mistakes without any fear of consequences,” he added, pointing to the aviation industry as an example, where he said safety improved because planemakers were willing to share information on failures with each other.

What has been the government’s approach to the AI industry?

Until recently, the government itself shared the industry’s optimism on AI, a space where Big Tech firms have often struck a balance between seeking regulation and seeking to control the direction that regulation takes. The IT Ministry last April categorically said that “the government is not considering bringing a law or regulating the growth of artificial intelligence in the country”.

But in the last few months, even before the now viral Gemini response, Mr. Chandrasekhar has expressed dissatisfaction with AI models spitting out uncomfortable responses. “You can’t ‘trial’ a car on the road and when there is an accident say, ‘whoops, it is just on trial.’ You need to sandbox that,” Mr. Chandrasekhar said of AI firms’ responses to criticism on bias. The tension underlines the conflict inherent to widely testing an experimental technology — wide testing is precisely what allows developers to detect these often unruly models’ mistakes and improve them. That dynamic was on display when Gemini generated racially inaccurate images of historical events, leading to a storm of criticism that saw the firm pause the image generation feature until it worked on a fix.

Will it benefit local developers?

“This is just a poor job in communication, resulting from the need to do something in an election year,” Aakrit Vaish, co-founder of Haptik, a conversational AI firm founded in 2013, said on X. Mr. Vaish amplified subsequent clarifications on the advisory’s applicability as good news for startups, and sought inputs from local firms to send to the ministry.

Atul Mehra, founder of Vaayushop, an AI finance firm, expressed hope that the advisory could actually translate into a benefit for local developers. While it was a “short term hassle,” he conceded on X, “it's a huge opportunity in disguise. It points to [a] need for local AI stacks, datasets, [and] GPUs [graphics processing units] … Let’s keep building and wait for our right moment to even beat Microsoft and Google.”

