The Guardian (USA)

UK and US intervene amid AI industry’s rapid advances

- Dan Milmo and Alex Hern

The UK and US have intervened in the race to develop ever more powerful artificial intelligence technology, as the British competition watchdog launched a review of the sector and the White House advised tech firms of their fundamental responsibility to develop safe products.

Regulators are under mounting pressure to intervene, as the emergence of AI-powered language generators such as ChatGPT raises concerns about the potential spread of misinformation, a rise in fraud and the impact on the jobs market, with Elon Musk among nearly 30,000 signatories to a letter published last month urging a pause in significant projects.

The UK Competition and Markets Authority (CMA) said on Thursday it would look at the underlying systems – or foundation models – behind AI tools. The initial review, described by one legal expert as a “pre-warning” to the sector, will publish its findings in September.

On the same day, the US government announced measures to address the risks in AI development, as Kamala Harris, the vice-president, met chief executives at the forefront of the industry’s rapid advances. In a statement, the White House said firms developing the technology had a “fundamental responsibility to make sure their products are safe before they are deployed or made public”.

The meeting capped a week during which a succession of scientists and business leaders issued warnings about the speed at which the technology could disrupt established industries. On Monday, Geoffrey Hinton, the “godfather of AI”, quit Google in order to speak more freely about the technology’s dangers, while the UK government’s outgoing scientific adviser, Sir Patrick Vallance, urged ministers to “get ahead” of the profound social and economic changes that could be triggered by AI, saying the impact on jobs could be as big as that of the Industrial Revolution.

Sarah Cardell, the CMA’s chief executive, said AI had the potential to “transform” the way businesses competed, but that consumers must be protected.

She said: “AI has burst into the public consciousness over the past few months but has been on our radar for some time. It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information.”

ChatGPT and Google’s rival Bard service are prone to delivering false information in response to users’ prompts, while concerns have been raised about AI-generated voice scams. The anti-misinformation outfit NewsGuard said this week that chatbots pretending to be journalists were running almost 50 AI-generated “content farms”. Last month, a song featuring fake AI-generated vocals purporting to be Drake and the Weeknd was pulled from streaming services.

The CMA review will look at how the markets for foundation models could evolve, what opportunities and risks there are for consumers and competition, and formulate “guiding principles” to support competition and protect consumers.

The leading players in AI are Microsoft, ChatGPT developer OpenAI – in which Microsoft is an investor – and Google parent Alphabet, which owns a world-leading AI business in UK-based DeepMind. Leading AI startups include Anthropic and Stability AI, the British company behind Stable Diffusion.

Alex Haffner, competition partner at the UK law firm Fladgate, said: “Given the direction of regulatory travel at the moment and the fact the CMA is deciding to dedicate resource to this area, its announcement must be seen as some form of pre-warning about aggressive development of AI programmes without due scrutiny being applied.”

In the US, Harris met the chief executives of OpenAI, Alphabet and Microsoft at the White House, and outlined measures to address the risks of unchecked AI development. In a statement following the meeting, Harris said she told the executives that “the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products”.

The administration said it would invest $140m (£111m) in seven new national AI research institutes, to pursue artificial intelligence advances that are “ethical, trustworthy, responsible, and serve the public good”. AI development is dominated by the private sector, with the tech industry producing 32 significant machine-learning models last year, compared with three produced by academia.

Leading AI developers have also agreed to their systems being publicly evaluated at this year’s Defcon 31 cybersecurity conference. Companies that have agreed to participate include OpenAI, Google, Microsoft and Stability AI.

“This independent exercise will provide critical information to researchers and the public about the impacts of these models,” said the White House.

Robert Weissman, the president of the consumer rights non-profit Public Citizen, praised the White House’s announcement as a “useful step” but said more aggressive action was needed. Weissman said this should include a moratorium on the deployment of new generative AI technologies, the term for tools such as ChatGPT and Stable Diffusion.

“At this point, Big Tech companies need to be saved from themselves. The companies and their top AI developers are well aware of the risks posed by generative AI. But they are in a competitive arms race and each believes themselves unable to slow down,” he said.

The EU was also told on Thursday that it must protect grassroots AI research or risk handing control of the technology’s development to US firms.

In an open letter coordinated by the German research group Laion – or Large-scale AI Open Network – the European parliament was told that one-size-fits-all rules risked eliminating open research and development.

“Rules that require a researcher or developer to monitor or control downstream use could make it impossible to release open-source AI in Europe,” which would “entrench large firms” and “hamper efforts to improve transparency, reduce competition, limit academic freedom, and drive investment in AI overseas”, the letter said.

“Europe cannot afford to lose AI sovereignty. Eliminating open-source R&D will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure.”

The largest AI efforts, by companies such as OpenAI and Google, are heavily controlled by their creators. It is impossible to download the model behind ChatGPT, for instance, and the paid-for access that OpenAI provides to customers comes with a number of legal and technical restrictions on how it can be used. By contrast, open-source efforts involve creating a model and then releasing it for anyone to use, improve or adapt as they see fit.

“We are working on open-source AI because we think that sort of AI will be more safe, more accessible and more democratic,” said Christoph Schuhmann, the organisational lead at Laion.

The emergence of AI-powered language generators like ChatGPT raises concerns about the potential spread of misinformation. Photograph: Florence Lo/Reuters
