Different approaches to AI regulation
Amid the global movement towards regulating AI systems, India's response will be crucial: the nation is currently home to one of the largest consumer bases and labour forces for technology companies. India's path must align with its SDGs while also sustaining economic growth.
The Artificial Intelligence (AI) space has seen several developments crucial to its regulation in recent years: the United Nations Resolution on Artificial Intelligence, the AI Act passed by the European Parliament, AI laws introduced in the U.K. and China, and the launch of the AI mission in India. These efforts to formalise AI regulation at the global level will be critical to various sectors of governance across countries.
With the passing of the United Nations Resolution on Artificial Intelligence, the discourse on the need to regulate AI has entered a new phase. At the centre of the adopted resolution was a global acknowledgement of the risks associated with AI systems and the urgent need to promote their responsible use. It was recognised that unethical and improper use of AI systems would impede the achievement of the 2030 Sustainable Development Goals (SDGs), weakening ongoing efforts across all three dimensions: social, environmental, and economic. Another contentious aspect addressed in the UN resolution is the possible adverse impact of AI on the workforce. Devising a response is imperative, especially for developing and least developed countries, whose labour markets are increasingly vulnerable to the use of such systems. Beyond the workforce, the impact on small and medium entrepreneurs also needs to be ascertained. As the first of its kind, the Resolution has thus shed light on the future implications of AI systems and the urgent need for collaborative action.
The EU’s approach
The EU recently passed the AI Act, the first comprehensive law establishing rules governing AI systems. Taking a risk-based approach, the Act sorts systems into four categories, namely unacceptable, high, limited, and minimal risk, prescribing guidelines for each. The Act imposes an absolute ban on applications that threaten citizens' rights, including manipulation of human behaviour, emotion recognition, and mass surveillance. While the Act allows exemptions from these bans where pertinent to law enforcement, it limits such deployment by requiring prior judicial or administrative authorisation.
The landmark legislation highlights two important considerations: acknowledging the compliance burden placed on business enterprises and startups, and regulating much-deliberated generative AI systems such as ChatGPT. Both warrant the immediate attention of policymakers, given their disruptive potential and the challenge of keeping pace with such rapidly evolving systems.
China’s stand on AI
A similar focus on identifying risks is evident in the approach adopted by China, which seeks to promote AI tools and innovation while safeguarding against future harm to the nation's social and economic goals.
The country released, in phases, a regulatory framework addressing the following three issues — content moderation, which includes identification of content generated through any AI system; personal data protection, with a specific focus on the need to procure users’ consent before accessing and processing their data; and algorithmic governance, with a focus on security and ethics while developing and running algorithms over any gathered dataset.
The U.K.’s framework
The U.K., on the other hand, has adopted a principled, context-based approach in its ongoing efforts to regulate AI systems. The approach requires mandatory consultations with regulatory bodies, expanding the government's technical know-how and expertise in regulating complex technologies while bridging regulatory gaps, if any. The U.K. has thus resorted to a decentralised, soft-law approach rather than regulating AI systems through stringent legal rules, in striking contrast to the EU.
India’s position
Amid this global movement, India's response will be shaped by its position as home to one of the largest consumer bases and labour forces for technology companies, with the country expected to host over 10,000 deep-tech startups by 2030. In this direction, a ₹10,300 crore allocation was approved for the IndiaAI Mission to further the country's AI ecosystem through enhanced public-private partnerships and to promote the startup ecosystem. Among other initiatives, the allocation will be used to deploy 10,000 Graphics Processing Units (GPUs), develop Large Multimodal Models (LMMs), and support AI-based research collaborations and innovation projects.
With its economy expanding, India's response must align with its commitment to the SDGs while ensuring that economic growth is maintained. This requires the judicious use of AI systems to offer solutions that further innovation while mitigating its risks. A gradual, phase-led approach appears more suitable for India's efforts towards a fair and inclusive AI ecosystem.
The author is the Vice Chancellor, National Law University Delhi. Inputs from Priyanshi, Academic Fellow, NLU Delhi. Views are personal.