EU’s groundbreaking artificial intelligence regulation places marker for continued growth
On April 21, 2021, the European Commission published its long-awaited proposal for a regulation on artificial intelligence (the “AI Regulation”).
The proposal introduces a first-of-its-kind, comprehensive and harmonized regulatory framework for artificial intelligence. For Israeli companies innovating in AI, it is a major step towards the legal certainty needed to facilitate further investment.
It will also affect Israeli companies that are using AI and are looking to do business with customers in the EU, as the new rules will place direct regulatory burdens on certain classifications of AI technologies.
The AI Regulation will affect providers, users, distributors, importers and resellers of AI that are placing AI systems on the market, putting them into service, or making use of them within the EU.
Israeli companies developing, selling or using AI systems which have a nexus to Europe will be governed by this regulation (even if the systems themselves are located in Israel, or elsewhere).
The regulation introduces a tiered set of regulatory requirements, with stricter controls applying to AI systems depending on the level of risk each system presents.
The most restrictive tier applies to prohibited AI practices. These are AI applications that the EU has determined to be so intrusive that they must not be allowed to take place. Such practices include AI used for social scoring, large-scale surveillance (with notable exceptions), adverse behavioral influencing through AI-based dark patterns (subliminal techniques beyond a person’s consciousness) and AI-based micro-targeting (exploiting the vulnerabilities of a specific group).
There is no scope to sell AI systems that fall foul of these restrictions in the EU.
The second classification relates to high-risk AI systems. These are technologies anticipated to present a significant risk of harm. Such systems are permitted, but only on a restricted basis, where specific regulatory controls are in place to ensure safe use. The AI Regulation includes a list of ‘high-risk’ AI systems (which may be expanded by the European Commission in due course), covering a wide range of applications, including AI systems deployed in relation to credit scoring; essential public infrastructure; social welfare and justice; medical and other regulated devices; and transportation systems.
If an AI technology falls within these categories, the controls that have to be adopted include:
Transparency to users about the characteristics, capabilities, and limitations of the technology.
Reporting of serious incidents to market surveillance authorities.
Establishment, implementation and documentation of a risk management system to assess, monitor and review risks, both before placing the system on the market and then on an ongoing basis.
Ensuring any data sets used to support training, validation and testing of AI are subject to appropriate data governance and management practices to mitigate the risk of bias, discrimination or other harm.
Ensuring effective human oversight over all AI systems, to review outputs and mitigate the risk of bias or other potential for harm.
Maintaining complete and up-to-date technical documentation for users.
Registration in an EU database of high-risk AI systems.
The third classification is for lower-risk AI systems. These are AI systems that fall outside the scope of those identified as ‘high-risk’ and are not deployed for a prohibited practice. Such systems are subject only to a transparency regime.
Regulatory oversight of the new regime is achieved through the establishment of supervisory and enforcement authorities in each EU member state, together with a new European Artificial Intelligence Board. These bodies will conduct market surveillance and control of AI systems, with enforcement powers that include the issuing of fines under a regime similar to that of the GDPR – in this case up to €30m or (if higher) between 2% and 6% of global annual turnover, depending on the infringement.
Israeli companies that provide AI into the EU market will need to be familiar with the new regime and prepared to cooperate with EU-based customers and regulators to support compliance, including by providing full access to training, validation and testing datasets.
Infringements could be costly even if all sales activity is undertaken offshore from Israel. The introduction of a new, clear and likely robustly enforced regulatory scheme in one of the world’s largest trading blocs will create a paradigm shift in responsibility across the AI ecosystem – at once providing legal certainty and stability, but also creating risk of non-compliance for those who do not step up to the new rules.
Israeli AI companies would do well to stay ahead of the emerging AI regulatory landscape and build compliance into their systems now, in order to secure further investment and maintain market-leading growth in this fast-moving industry.
Andrew Dyson is a partner at the DLA Piper Intellectual Property and Technology group, where he co-chairs the firm’s global Data Protection, Privacy and Security practice.
Ron Feingold is an intern at the DLA Piper Israel Country Group.