The Malta Business Weekly

Artificial Intelligence Act: MEPs adopt landmark law

• Safeguards on general-purpose artificial intelligence • Limits on the use of biometric identification systems by law enforcement


Yesterday the European Parliament approved the Artificial Intelligence Act, which ensures safety and compliance with fundamental rights, while boosting innovation.

The regulation, agreed in negotiations with member states last December, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.

It will enter into force 20 days after its publication in the Official Journal, and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after the entry into force date; codes of practice (nine months after entry into force); general-purpose AI rules including governance (12 months after entry into force); and obligations for high-risk systems (36 months).

Banned applications

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

Law enforcement exemptions

The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search for a missing person or preventing a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, and justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

Transparency requirements

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content (“deepfakes”) will need to be clearly labelled as such.

Measures to support innovation and SMEs

Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development,” said Internal Market Committee co-rapporteur Brando Benifei during the plenary debate on Tuesday.

Civil Liberties Committee co-rapporteur Dragos Tudorache said: “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice.”
