EU Moves To Regulate Artificial Intelligence
The European Commission has proposed proportionate rules to address risks posed by AI, writes Brian McElligott of Mason Hayes & Curran LLP.
The EU is leading the global charge to regulate Artificial Intelligence (AI) with the recent publication of its first AI Regulation. Critics claim this is a retrograde step, but the Commission predicts its AI project will set the highest standard worldwide and will ultimately see the EU lead world AI markets. The new rules will be applied directly in the same way across all member states based on what is claimed to be a future-proof definition of AI, following a risk-based approach.
Who is targeted?
Providers, users, importers and distributors will all be subject to the new rules. Providers will bear the bulk of the burden under the Regulation. Importers, distributors and users will also need to pay close attention, as their obligations will be significant and will require investment in resources and administration.
The EU is keen that we understand that AI technology itself is not the focus of these new laws. The target is the intended purpose of the AI, i.e. the use for which an AI system is intended by the provider. AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. These include AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance that encourage dangerous behaviour in minors, or systems that allow ‘social scoring’ by governments.
The proposed ban on ‘real-time’ remote biometric identification or facial recognition systems in publicly accessible spaces has attracted some publicity. However, the challenges with deploying these systems in a generalised manner are already well understood under GDPR.
On a sliding scale of risk, ‘high-risk’ AI spans uses of AI in recruitment, education, health, judicial proceedings and the public sector. This category will be heavily regulated, and providers in particular will be subject to significant checks and conformity procedures, not unlike those that currently regulate the medical device sector.
Another category is ‘limited risk’, like chatbots. When interacting with chatbots, users should be made aware that they are interacting with a machine so they can take an informed decision to continue or step back. The final category consists of systems like AI-enabled video games and spam filters. These are classed as ‘minimal risk’ and will be subject to minimal or no additional regulation.
Member states are mandated to establish supports for SMEs, providing guidance and responding to queries about the implementation of the Regulation. SMEs will also be treated favourably from a costs perspective when applying for conformity assessments of high-risk AI systems.
Infringements may attract fines of up to €30m or up to 6% of the offender’s total worldwide annual turnover. The Commission stresses that its standard graduated response to infringements will apply, and these significant fines will be a last resort.
The AI Regulation needs to be ratified by both the Council and the Parliament, which will take time and will be subject to heavy lobbying. Early indications point to late 2023 for it becoming law. In the meantime, those producing high-risk AI systems have a lot of work to do!
For more information on this and other topics related to Artificial Intelligence, contact a member of the firm’s Intellectual Property or Technology teams.
Brian McElligott is a partner in the Technology team in Mason Hayes & Curran LLP. For more information, visit MHC.ie/AI.