The Guardian (USA)

What will the EU’s proposed act to regulate AI mean for consumers?

- Dan Milmo and Alex Hern

The European Union’s proposed AI law was endorsed by the European parliament on Wednesday, a milestone in regulating the technology and an important step towards the legislation taking effect.

It is now expected to be rubber-stamped by a council of ministers, becoming law within weeks. However, the act will come into force in stages, with a cascade of deadlines for compliance over the next three years.

“Users will be able to trust that the AI tools they have access to have been carefully vetted and are safe to use,” said Guillaume Couneson, a partner at the law firm Linklaters. “This is similar to users of banking apps being able to trust that the bank has taken stringent security measures to enable them to use the apps safely.”

The bill matters outside the EU because Brussels is an influential tech regulator, as shown by GDPR’s impact on the management of people’s data. The AI act could do the same.

“Many other countries will be watching what happens in the EU following the adoption of the AI act. The EU approach will likely only be copied if it is shown to work,” Couneson added.

How does the bill define AI?

A basic definition of AI is a computer system that carries out tasks you would normally associate with human levels of intelligence, such as writing an essay or drawing a picture.

The act itself has a more detailed take, describing the AI technology it regulates as a “machine-based system designed to operate with varying levels of autonomy”, which obviously covers tools like ChatGPT.

This system may show “adaptiveness after deployment” – ie it learns on the job – and infers from the inputs it receives “how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments”. This definition covers chatbots, but also AI tools that, for instance, sift through job applications.

As detailed below, the legislation bans systems that pose an “unacceptable risk”, but it exempts AI tools designed for military, defence or national security use, exemptions that alarm many tech safety advocates. It also does not apply to systems designed for use in scientific research and innovation.

“We fear that the exemptions for national security in the AI Act provide member states with a carte blanche to bypass crucial AI regulations and create a high risk of abuse,” said Kilian Vieth-Ditlmann, deputy head of policy at AlgorithmWatch, a German non-profit organisation that campaigns for responsible AI use.

How does the bill tackle the risks posed by AI?

Certain systems will be prohibited. These include systems that seek to manipulate people to cause harm; “social scoring” systems that classify people based on social behaviour or personality, like the one in Rongcheng, China, where the city rated aspects of residents’ behaviour; Minority Report-style attempts at predictive policing; monitoring people’s emotions at work or in schools; “biometric categorisation” systems that sift people based on their biometric data (retina scans, facial recognition, fingerprints) to infer things such as race, sexual orientation, political opinions or religious beliefs; and compiling facial recognition databases through scraping facial images from the internet or CCTV.

Exemptions for law enforcement

Facial recognition has been a contentious issue in the legislation. The use of real-time biometric identification systems – which covers facial recognition technology on live crowds – is banned, but allowed for law enforcement in a number of circumstances.

Law enforcement agencies can use such technology to find a missing person or prevent a terror attack, but they will need approval from the authorities – although in exceptional circumstances it can be deployed without prior approval.

What about systems that are risky but not banned?

The act has a special category for “high risk” systems that will be legal but closely observed. Included are systems used in critical infrastructure, like water, gas and electricity, or those deployed in areas like education, employment, healthcare and banking. Certain law enforcement, justice and border control systems will also be covered. For instance, a system used in deciding whether someone is admitted to an educational institution, or whether they get a job, will be deemed high-risk.

The act requires these tools to be accurate, to be subject to risk assessments, to have human oversight and to have their usage logged. EU citizens can also ask for explanations about decisions made by these AI systems that have affected them.

What about generative AI?

Generative AI – the term for systems that produce plausible text, image, video and audio from simple prompts – is covered by provisions for what the act calls “general-purpose” AI systems.

There will be a two-tiered approach. Under the first tier, all model developers will need to comply with EU copyright law and provide detailed summaries of the content used to train the model. It is unclear how already-trained models will be able to comply, and some are already under legal pressure: the New York Times is suing OpenAI and Getty Images is suing Stability AI, alleging copyright infringement. Open-source models, which are freely available to the public, unlike “closed” models like ChatGPT’s GPT-4, will be exempt from the copyright requirement.

A tougher tier is reserved for models that pose a “systemic risk” – based on an assessment of their more human-like “intelligence” – and is expected to include chatbots and image generators. The measures for this tier include reporting serious incidents caused by the models, such as death or breach of fundamental rights, and conducting “adversarial testing”, where experts attempt to bypass a model’s safeguards.

What does it mean for deepfakes?

People, companies or public bodies that issue deepfakes have to disclose whether the content has been artificially generated or manipulated. If it is done for “evidently” artistic, creative or satirical work, it still needs to be flagged, but in an “appropriate manner that does not hamper the display or enjoyment of the work”.

Text produced by chatbots that informs the public “on matters of public interest” needs to be flagged as AI-made, but not where it has undergone a process of human review or editorial control. Developers of AI systems also need to ensure that their output can be detected as AI-made, by watermarking or otherwise flagging the material.

What do AI and tech companies think?

The bill has received a mixed response. The largest tech companies are publicly supportive of the legislation in principle, while wary of the specifics. Amazon said it was committed to collaborating with the EU “to support the safe, secure and responsible development of AI technology”, but Mark Zuckerberg’s Meta warned against overregulation. “It is critical we don’t lose sight of AI’s huge potential to foster European innovation and enable competition, and openness is key here,” the company’s head of EU affairs said.

In private, responses have been more critical. One senior figure at a US company warned that the EU had set a limit for the computing power used to train AI models that is much lower than similar proposals in the US. Models trained with more than 10 to the power of 25 “flops” – floating-point operations, a measure of the total computation used in training – will be hit with burdensome requirements to prove they don’t create systemic risks. This could prompt European companies to simply up stakes and move west to avoid EU restrictions.

What are the punishments under the act?

Fines will range from €7.5m or 1.5% of a company’s total worldwide turnover – whichever is higher – for giving incorrect information to regulators, to €15m or 3% of worldwide turnover for breaching certain provisions of the act, such as transparency obligations, to €35m, or 7% of turnover, for deploying or developing banned AI tools. There will be more proportionate fines for smaller companies and startups.
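The tiered structure described above – a fixed amount or a share of worldwide turnover, whichever is higher – can be sketched in a few lines of Python. The tier names and the helper function here are illustrative labels, not terms from the act itself.

```python
# Illustrative sketch of the AI act's tiered fines as reported above:
# each tier caps at the higher of a fixed sum or a share of worldwide
# turnover. Tier keys and this helper are hypothetical, for clarity only.
TIERS = {
    "incorrect_information": (7_500_000, 0.015),   # €7.5m or 1.5%
    "provision_breach":      (15_000_000, 0.03),   # €15m or 3%
    "banned_ai":             (35_000_000, 0.07),   # €35m or 7%
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the maximum fine for a tier: the fixed amount or the
    turnover share, whichever is higher."""
    fixed, share = TIERS[tier]
    return max(fixed, share * worldwide_turnover_eur)

# A company with €2bn turnover deploying a banned tool:
# max(€35m, 7% of €2bn = €140m) gives €140m.
print(max_fine("banned_ai", 2_000_000_000))
```

For a small firm, the fixed amount dominates; for the largest companies, the turnover percentage does – which is why the act also provides more proportionate fines for smaller companies and startups.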

Once the act becomes law, prohibitions on certain categories will come into force after six months, and the wider obligations after 12 months – so at some point next year. Providers and deployers of high-risk systems will have three years to comply. There will also be a new European AI office that will set standards and be the main oversight body for GPAI models.

Photograph: Jonathan Raa/NurPhoto/Rex/Shutterstock. The EU’s proposed AI act, which will be implemented over a period of three years, aims to address some concerns over the technology.
