The Hindu (Mumbai)

Different approaches to AI regulation

Amid the global movement towards regulating AI systems, India’s response will be crucial, as the nation caters to one of the largest consumer bases and labour forces for technology companies. India’s path must align with its SDGs while also ensuring that economic growth is maintained.

- G. S. Bajpai

The Artificial Intelligence (AI) space has seen several developments crucial to its regulation in recent years — the United Nations’ Resolution on Artificial Intelligence, the AI Act passed by the European Parliament, laws on AI introduced in the U.K. and China, and the launch of the AI mission in India. These efforts to formalise AI regulation at the global level will be critical to various sectors of governance in all countries.

With the passing of the United Nations Resolution on Artificial Intelligence, the discourse on the regulation of AI has entered a new phase. At the centre of the adopted resolution was a global acknowledgement of the risks associated with AI systems and the urgent need to promote their responsible use. It was recognised that unethical and improper use of AI systems would impede the achievement of the 2030 Sustainable Development Goals (SDGs), weakening ongoing efforts across all three dimensions — social, environmental, and economic. Another contentious aspect mentioned in the UN resolution is the possible adverse impact of AI on the workforce. It would be imperative, especially for developing and least developed countries, to devise a response, as the labour market in these countries is increasingly vulnerable to the use of such systems. In addition to the workforce, the impact on small and medium entrepreneurs also needs to be ascertained. Thus, being the first of its kind, the Resolution has shed light on the future implications of AI systems and the urgent need for collaborative action.

The EU’s approach

The EU recently passed the AI Act, the first comprehensive law establishing rules governing AI systems. With its risk-based approach, the Act sorts systems into four categories — unacceptable, high, limited, and minimal risk — prescribing guidelines for each. The Act places an absolute ban on applications that threaten citizens’ rights, including the manipulation of human behaviour, emotion recognition, and mass surveillance. While the Act allows exemptions from these bans where pertinent to law enforcement, it limits such deployment by requiring prior judicial or administrative authorisation.

The landmark legislation highlights two important considerations — acknowledging the compliance burden placed on business enterprises and startups, and regulating much-debated generative AI systems such as ChatGPT. These two factors warrant the immediate attention of policymakers, given their disruptive potential and the challenge of keeping pace with such rapidly evolving systems.

China’s stand on AI

A focus on identifying risks is also evident in the approach adopted by China, which promotes AI tools and innovation while building in safeguards against any future harm to the nation’s social and economic goals.

The country released, in phases, a regulatory framework addressing the following three issues — content moderation, which includes identification of content generated through any AI system; personal data protection, with a specific focus on the need to procure users’ consent before accessing and processing their data; and algorithmic governance, with a focus on security and ethics while developing and running algorithms over any gathered dataset.

The U.K.’s framework

The U.K., on the other hand, has adopted a principled, context-based approach in its ongoing efforts to regulate AI systems. The approach mandates consultations with regulatory bodies, expanding the government’s technical know-how and expertise in regulating complex technologies while bridging any regulatory gaps. The U.K. has thus resorted to a decentralised, soft-law approach rather than regulating AI systems through stringent legal rules — a striking contrast to the EU approach.

India’s position

Amid the global movement towards regulating AI systems, India’s response will be crucial, as the nation currently caters to one of the largest consumer bases and labour forces for technology companies. India is expected to be home to over 10,000 deep-tech startups by 2030. In this direction, a ₹10,300 crore allocation was approved for the India AI mission to strengthen the country’s AI ecosystem through enhanced public-private partnerships and to promote the startup ecosystem. Among other initiatives, the allocation would be used to deploy 10,000 Graphics Processing Units, develop Large Multimodal Models (LMMs), and support other AI-based research collaborations and innovative projects.

With its economy expanding, India’s response must align with its commitment to the SDGs while also ensuring that economic growth is maintained. This will require the judicious use of AI systems to offer solutions that further innovation while mitigating its risks. A gradual, phase-led approach appears more suitable for India’s efforts towards a fair and inclusive AI system.

The author is the Vice Chancellor, National Law University Delhi. Inputs from Priyanshi, Academic Fellow, NLU Delhi. Views are personal.
