India: Permission required for launching AI models
New Delhi, India - The Union Ministry of Electronics and Information Technology (MeitY) has issued a second advisory to platforms and intermediaries, asking them to seek explicit permission from the Centre before launching under-testing Artificial Intelligence (AI) models in the country.
The advisory was issued on Friday evening, more than two months after the ministry issued an advisory in December last year to social media platforms, directing them to follow existing IT rules to deal with the issue of deepfakes.
The advisory stated: “The use of under-testing / unreliable Artificial Intelligence model(s)/LLM/generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India and be deployed only after appropriately labelling the possible and inherent fallibility or unreliability of the output generated. Further, ‘consent popup’ mechanism may be used to explicitly inform the users about the possible and inherent fallibility or unreliability of the output generated.”
The advisory added that it had recently come to the notice of the ministry that intermediaries and platforms are failing to fulfil the due-diligence obligations outlined under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules).
“All intermediaries or platforms (are) to ensure that use of Artificial Intelligence model(s)/LLM/generative AI, software(s) or algorithm(s) on or through its computer resource does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined in the Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act,” it stated.