Business Plus

EU Moves To Regulate Artificial Intelligence

The European Commission has proposed proportionate rules to address risks posed by AI, writes Brian McElligott of Mason Hayes & Curran LLP


The EU is leading the global charge to regulate Artificial Intelligence (AI) with the recent publication of its first AI Regulation. Critics claim this is a retrograde step, but the Commission predicts its AI project will set the highest standard worldwide and will ultimately see the EU lead world AI markets. The new rules will be applied directly in the same way across all member states based on what is claimed to be a future-proof definition of AI, following a risk-based approach.

Who is targeted?

Providers, users, importers and distributors will all be subject to the new rules. Providers will bear the bulk of the burden under this Regulation. Importers, distributors and users will also need to pay close attention to the Regulation, as their obligations will be significant and will also require investment in resources and administration.

Intended Use

The EU is keen that we understand that AI technology itself is not the focus of these new laws. The target is the intended purpose of the AI, i.e. the use for which an AI system is intended by the provider. AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. These include AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance encouraging dangerous behaviour of minors, or systems that allow ‘social scoring’ by governments.

The proposed ban on ‘real-time’ remote biometric identification or facial recognition systems in publicly accessible spaces has attracted some publicity. However, the challenges with deploying these systems in a generalised manner are already well understood under GDPR.

Risk-Based Regulation

On the sliding scale of risk, ‘high-risk’ AI spans uses of AI in recruitment, education, health, judicial proceedings and the public sector. This category will be heavily regulated, and providers in particular will be subject to significant checks and conformity procedures not unlike those that currently regulate the medical device sector.

Another category is ‘limited risk’, such as chatbots. When interacting with chatbots, users should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back. The final category consists of systems like AI-enabled video games and spam filters. These are classed as ‘minimal risk’ and will be subject to minimal or no additional regulation.

SME Measures

Member states are mandated to support SMEs by providing guidance and responding to queries about the implementation of this Regulation. SMEs will also be treated favourably from a costs perspective when applying for conformity assessments of high-risk AI systems.


Penalties

There is potential for infringements to give rise to maximum fines of up to €30m or up to 6% of the offender’s total worldwide annual turnover. The Commission stresses that its standard graduated response to dealing with infringements will apply, and these significant fines will be a last resort.

Next Steps

The AI Regulation needs to be ratified by both the Council and the Parliament, which will take time and will be subject to heavy lobbying. Early indications point to late 2023 for it becoming law. In the meantime, those producing high-risk AI systems have a lot of work to do!

For more information on this and other topics related to Artificial Intelligence, contact a member of the firm’s Intellectual Property or Technology teams.

Brian McElligott is a partner in the Technology team in Mason Hayes & Curran LLP. For more information visit

Brian McElligott, Mason Hayes & Curran LLP
