The Denver Post

Europe set to pioneer AI policy

By Kelvin Chan

LONDON >> The breathtaking development of artificial intelligence has dazzled users by composing music, creating images and writing essays, while also raising fears about its implications. Even European Union officials working on groundbreaking rules to govern the emerging technology were caught off guard by AI's rapid rise.

The 27-nation bloc proposed the Western world's first AI rules two years ago, focusing on reining in risky but narrowly focused applications. General purpose AI systems like chatbots were barely mentioned. Lawmakers working on the AI Act considered whether to include them but weren't sure how, or even if it was necessary.

"Then ChatGPT kind of boom, exploded," said Dragos Tudorache, a Romanian member of the European Parliament co-leading the measure. "If there was still some that doubted as to whether we need something at all, I think the doubt was quickly vanished."

The release of ChatGPT last year captured the world's attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online materials. With concerns emerging, European lawmakers moved swiftly in recent weeks to add language on general AI systems as they put the finishing touches on the legislation.

The EU's AI Act could become the de facto global standard for artificial intelligence, with companies and organizations potentially deciding that the sheer size of the bloc's single market would make it easier to comply than develop different products for different regions.

"Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term 'AI' can cover," said Sarah Chander, senior policy adviser at digital rights group EDRi.

Authorities worldwide are scrambling to figure out how to control the rapidly evolving technology to ensure that it improves people's lives without threatening their rights or safety.
Regulators are concerned about new ethical and societal risks posed by ChatGPT and other general purpose AI systems, which could transform daily life, from jobs and education to copyright and privacy.

The White House recently brought in the heads of tech companies working on AI, including Microsoft, Google and ChatGPT creator OpenAI, to discuss the risks, while the Federal Trade Commission has warned that it wouldn't hesitate to crack down. China has issued draft regulations mandating security assessments for any products using generative AI systems like ChatGPT. Britain's competition watchdog has opened a review of the AI market, while Italy briefly banned ChatGPT over a privacy breach.

The EU's sweeping regulations — covering any provider of AI services or products — are expected to be approved by a European Parliament committee Thursday, then head into negotiations between the 27 member countries, Parliament and the EU's executive Commission.

European rules influencing the rest of the world — the so-called Brussels effect — previously played out after the EU tightened data privacy and mandated common phone-charging cables, though such efforts have been criticized for stifling innovation.

Attitudes could be different this time. Tech leaders including Elon Musk and Apple co-founder Steve Wozniak have called for a six-month pause to consider the risks. Geoffrey Hinton, a computer scientist known as the "Godfather of AI," and fellow AI pioneer Yoshua Bengio voiced their concerns last week about unchecked AI development.

Tudorache said such warnings show the EU's move to start drawing up AI rules in 2021 was "the right call."

Google, which responded to ChatGPT with its own Bard chatbot and is rolling out AI tools, declined to comment. The company has told the EU that "AI is too important not to regulate."

Microsoft, a backer of OpenAI, did not respond to a request for comment. It has welcomed the EU effort as an important step "toward making trustworthy AI the norm in Europe and around the world."

Mira Murati, chief technology officer at OpenAI, said in an interview last month that she believed governments should be involved in regulating AI technology. But asked if some of OpenAI's tools should be classified as posing a higher risk, in the context of proposed European rules, she said it's "very nuanced."

"It kind of depends where you apply the technology," she said, citing as an example a "very high-risk medical use case or legal use case" versus an accounting or advertising application.

OpenAI CEO Sam Altman plans stops in Brussels and other European cities this month in a world tour to talk about the technology with users and developers.
Recently added provisions to the EU's AI Act would require "foundation" AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.

Foundation models, also known as large language models, are a subcategory of general purpose AI that includes systems like ChatGPT. Their algorithms are trained on vast pools of online information, like blog posts, digital books, scientific articles and pop songs.

"You have to make a significant effort to document the copyrighted material that you use in the training of the algorithm," paving the way for artists, writers and other content creators to seek redress, Tudorache said.

Officials drawing up AI regulations have to balance risks that the technology poses with the transformative benefits that it promises. Big tech companies developing AI systems and European national ministries looking to deploy them "are seeking to limit the reach of regulators," while civil society groups are pushing for more accountability, said EDRi's Chander.
