Arkansas Democrat-Gazette

EU’s AI rules face critical moment

- KELVIN CHAN

LONDON — Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week — talks complicated by the sudden rise of generative artificial intelligence that produces human-like work.

First suggested in 2019, the EU’s AI Act was expected to be the world’s first comprehensive set of artificial intelligence regulations, further cementing the 27-nation bloc’s position as a global trendsetter when it comes to reining in the tech industry.

But the process has been bogged down by a last-minute battle over how to govern systems that underpin general purpose artificial intelligence services like OpenAI’s ChatGPT and Google’s Bard chatbot. Big tech companies are lobbying against what they see as overregulation that stifles innovation, while European lawmakers want added safeguards for the cutting-edge artificial intelligence systems those companies are developing.

Meanwhile, the United States, U.K., China and global coalitions like the Group of 7 major democracies have joined the race to draw up guardrails for the rapidly developing technology, underscored by warnings from researchers and rights groups of the existential dangers that generative artificial intelligence poses to humanity as well as the risks to everyday life.

“Rather than the AI Act becoming the global gold standard for AI regulation, there’s a small but growing chance that it won’t be agreed before the European Parliament elections” next year, said Nick Reiners, a tech policy analyst at Eurasia Group, a political risk advisory firm.

He said “there’s simply so much to nail down” at what officials are hoping is a final round of talks Wednesday. Even if they work late into the night as expected, they might have to scramble to finish in the new year, Reiners said.

When the European Commission, the EU’s executive arm, unveiled the draft in 2021, it barely mentioned general purpose artificial intelligence systems like chatbots. The proposal to classify artificial intelligence systems by four levels of risk — from minimal to unacceptable — was essentially intended as product safety legislation.

Brussels wanted to test and certify the information used by algorithms powering artificial intelligence, much like consumer safety checks on cosmetics, cars and toys.

That changed with the boom in generative artificial intelligence, which sparked wonder by composing music, creating images and writing essays resembling human work. It also stoked fears that the technology could be used to launch cyberattacks or create new bioweapons.

The risks led EU lawmakers to beef up the AI Act by extending it to foundation models. Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet.

Foundation models give generative artificial intelligence systems such as ChatGPT the ability to create something new, unlike traditional artificial intelligence, which processes data and completes tasks using predetermined rules.

Chaos last month at Microsoft-backed OpenAI, which built one of the most famous foundation models, GPT-4, reinforced for some European leaders the dangers of allowing a few dominant artificial intelligence companies to police themselves.

While Chief Executive Officer Sam Altman was fired and swiftly rehired, some board members with deep reservations about the safety risks posed by artificial intelligence left, signaling that artificial intelligence corporate governance could fall prey to boardroom dynamics.

“At least things are now clear” that companies like OpenAI defend their businesses and not the public interest, European Commissioner Thierry Breton told an artificial intelligence conference in France days after the tumult.

Resistance to government rules for these artificial intelligence systems came from an unlikely place: France, Germany and Italy. The EU’s three largest economies pushed back with a position paper advocating for self-regulation.

The change of heart was seen as a move to help homegrown generative artificial intelligence players such as French startup Mistral AI and Germany’s Aleph Alpha.

Behind it “is a determination not to let U.S. companies dominate the artificial intelligence ecosystem like they have in previous waves of technologies such as cloud [computing], e-commerce and social media,” Reiners said.

A group of influential computer scientists published an open letter warning that weakening the AI Act this way would be “a historic failure.” Executives at Mistral, meanwhile, squabbled online with a researcher from an Elon Musk-backed nonprofit that aims to prevent “existential risk” from artificial intelligence.

(AP) The OpenAI logo appears on a mobile phone in front of a screen showing part of the company website in New York.
