The Malta Business Weekly

Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI

• Safeguards agreed on general purpose artificial intelligence • Limitation on the use of biometric identification systems by law enforcement


MEPs reached a political deal with the Council on a bill to ensure AI in Europe is safe, respects fundamental rights and democracy, while businesses can thrive and expand.

Last week, Parliament and Council negotiators reached a provisional agreement on the Artificial Intelligence Act. This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact.

Banned applications

Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit:

• biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);

• untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;

• emotion recognition in the workplace and educational institutions;

• social scoring based on social behaviour or personal characteristics;

• AI systems that manipulate human behaviour to circumvent people’s free will;

• AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

Law enforcement exemptions

Negotiators agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crimes. “Post-remote” RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.

“Real-time” RBI would comply with strict conditions and its use would be limited in time and location, for the purposes of:

• targeted searches of victims (abduction, trafficking, sexual exploitation),

• prevention of a specific and present terrorist threat, or

• the localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).

Obligations for high-risk systems

For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law), clear obligations were agreed. MEPs successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour are also classified as high-risk. Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.

Guardrails for general artificial intelligence systems

To account for the wide range of tasks AI systems can accomplish and the quick expansion of their capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements as initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.

For high-impact GPAI models with systemic risk, Parliament negotiators managed to secure more stringent obligations. If these models meet certain criteria, they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.

Measures to support innovation and SMEs

MEPs wanted to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants controlling the value chain. To this end, the agreement promotes so-called regulatory sandboxes and real-world testing, established by national authorities to develop and train innovative AI before placement on the market.

Sanctions and entry into force

Non-compliance with the rules can lead to fines ranging from 35 million euro or 7% of global turnover to 7.5 million euro or 1.5% of turnover, depending on the infringement and the size of the company.

The agreed text will now have to be formally adopted by both Parliament and Council to become EU law. Parliament’s Internal Market and Civil Liberties committees will vote on the agreement in a forthcoming meeting.

