The Guardian (USA)

TechScape: Why is the UK so slow to regulate AI?

- Alex Hern

Britain wants to lead the world in AI regulation. But AI regulation is a rapidly evolving, contested policy space in which there’s little agreement over what a good outcome would look like, let alone the best methods to get there. And being the third most important hub of AI research in the world doesn’t give you an awful lot of power when the first two are the US and China.

How to slice through this Gordian knot? Simple: move swiftly and decisively to do … absolutely nothing.

The British government took the next step towards its AI regulation bill today. From our story:

When the first draft of the AI white paper was released, in March 2023, reaction was dismissive. The government’s proposals dropped on the same day as the now-notorious call for a six-month “pause” in AI research to control the risk of out-of-control systems. Against that background, the white paper seemed pitiful.

The proposal was to give regulators no new powers at all, nor to hand any individual body the responsibility for guiding AI development. Instead, the government planned to coordinate existing regulators such as the Competition and Markets Authority and the Health and Safety Executive, offering five principles to guide the regulatory framework when they think about AI.

This approach was criticised for having “significant gaps” by the Ada Lovelace Institute, the UK’s leading AI research group, even ignoring the fact that a years-long legislative process would leave AI unregulated in the interim period.

So what’s changed? Well, the government has found a truly whopping £10m to hand to regulators to “upskill” them, and it has set a deadline of 30 April for the biggest to publish their AI plans. “The UK government will not rush to legislate, or risk implementing ‘quick-fix’ rules that would soon become outdated or ineffective,” a Department for Science, Innovation and Technology spokesperson said.

It is an odd definition of “global AI leadership”, where being the quickest to say “we’re not doing anything” counts. The government is also “thinking” about real regulations, positing “future binding requirements, which could be introduced for developers building the most advanced AI systems”.

A second, slightly larger, pot of money will launch “nine new research hubs across the UK” funded by “nearly” £90m. The government also announced £2m of funding to support “new research projects that will help to define what responsible AI looks like”.

There’s a tragicomic element to reading a government press release that triumphantly discloses £2m of funding just a week after Yoshua Bengio, one of the three “godfathers” of AI, urged Canada to spend $1bn building a publicly owned supercomputer to keep up with the technology giants. It’s like bringing a spoon to a knife fight.

You can call it staying nimble in the face of conflicting demands, but after a while – 11 months and counting – it just looks like an inability to commit. The day before the latest updates to the AI white paper were announced, the Financial Times broke the news that a different pillar of AI regulation had collapsed. From its story (£):

Unlike broader AI regulation – where there’s a morass of conflicting opinions and some very vague long-term goals – copyright reform is quite a clean trade-off. On the one hand, creative and media businesses who own valuable intellectual property; on the other, technology firms who can use that IP to build valuable AI tools. One or the other group is going to be irritated by the outcome; a perfect compromise would merely mean both are.

Last month, the boss of Getty Images was one of many calling on the UK to back its creative industries, one-tenth of the British economy, over the theoretical benefits that AI might bring in the future. And so, faced with a hard choice to make and no right answer, the government chose to do nothing. That way, it can’t lead the world in the wrong direction. And isn’t that what leadership is all about?

Deeply fake

To be fair to the government, there are obvious problems with moving too fast. To see some of them, let’s look at social media. Facebook’s rules don’t ban deepfake videos of Joe Biden, its oversight board (AKA its “supreme court”) has found. But it’s honestly not clear what they do ban, which is going to be an increasing problem. From our story:

Facebook rushed out a policy on “manipulated media” amid growing interest in deepfakes a few years ago, before ChatGPT and large language models became the AI fad du jour. The rules barred misleadingly altered videos made by AI.

The problem, the oversight board notes, is that it is an impossible policy to apply, with little obvious rationale behind it and no clear theory of harm it seeks to prevent. How is a moderator supposed to distinguish between a video made by AI, which is banned, and a video made by a skilled video editor, which is allowed? Even if they can distinguish them, why is only the former problematic enough to remove from the site?

The oversight board suggested updating the rules to remove the faddish reference to AI entirely, instead requiring labels identifying audio and video content as manipulated, regardless of the manipulation technique.

Meta said it would update the policy.

Age-appropriate social media

In the wake of her daughter’s murder by two classmates, the mother of Brianna Ghey has called for a revolution in how we approach teenage use of social media. Under 16s, she says, should be limited to using devices built for teens, which can be easily monitored by parents, with the full spectrum of tech-enabled living age-gated by the government or tech companies.

I spoke to Archie Bland, editor of our daily newsletter First Edition, about her pleas:

You can read Archie’s whole email here (and do also sign up here to get First Edition every weekday morning).

If you want to read the complete version of the newsletter please subscribe to receive TechScape in your inbox every Tuesday.

Joe Biden poses for smartphone shots on the campaign trail. The US president was the subject of a fake video posted on Facebook. Photograph: Evan Vucci/AP
Rishi Sunak at the AI safety summit last November. Photograph: Chris J Ratcliffe/EPA
