The Guardian (USA)

When the tech boys start asking for new regulations, you know something’s up

- John Naughton

Watching the opening day of the US Senate hearings on AI brought to mind Marx’s quip about history repeating itself, “the first time as tragedy, the second as farce”. Except this time it’s the other way round. Some time ago we had the farce of the boss of Meta (née Facebook) explaining to a senator that his company made money from advertising. This week we had the tragedy of seeing senators quizzing Sam Altman, the new acceptable face of the tech industry.

Why tragedy? Well, as one of my kids, looking up from revising O-level classics, once explained to me: “It’s when you can see the disaster coming but you can’t do anything to stop it.” The trigger moment was when Altman declared: “We think that regulatory interventions by government will be critical to mitigate the risks of increasingly powerful models.” Warming to the theme, he said that the US government “might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities”. He believed that companies like his can “partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes that develop and update safety measures and examining opportunities for global coordination.”

To some observers, Altman’s testimony looked like big news: wow, a tech boss actually saying that his industry needs regulation! Less charitable observers (like this columnist) see two alternative interpretations. One is that it’s an attempt to consolidate OpenAI’s lead over the rest of the industry in large language models (LLMs), because history suggests that regulation often enhances dominance. (Remember AT&T.) The other is that Altman’s proposal is an admission that the industry is already running out of control, and that he sees bad things ahead. So his proposal is either a cunning strategic move or a plea for help. Or both.

As a general rule, whenever a CEO calls for regulation, you know something’s up. Meta, for example, has been running ads for ages in some newsletters saying that new laws are needed in cyberspace. Some of the cannier crypto crowd have also been baying for regulation. Mostly, these calls are pitches for corporations – through their lobbyists – to play a key role in drafting the requisite legislation. Companies’ involvement is deemed essential because – according to the narrative – government is clueless. As Eric Schmidt – the nearest thing tech has to Machiavelli – put it last Sunday on NBC’s Meet the Press, the AI industry needs to come up with regulations before the government tries to step in “because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no one in the government who can get it right. But the industry can roughly get it right and then the government can put a regulatory structure around it.”

Don’t you just love that idea of the tech boys roughly “getting it right”? Similar claims are made by foxes when pitching for henhouse-design contracts. The industry’s next strategic ploy will be to plead that the current worries about AI are all based on hypothetical scenarios about the future. The most polite term for this is baloney. ChatGPT and its bedfellows are – among many other things – social media on steroids. And we already know how these platforms undermine democratic institutions and possibly influence elections. The probability that important elections in 2024 will not be affected by this kind of AI is precisely zero.

Besides, as Scott Galloway has pointed out in a withering critique, it’s also a racing certainty that chatbot technology will exacerbate the epidemic of loneliness that is afflicting young people across the world. “Tinder’s former CEO is raising venture capital for an AI-powered relationship coach called Amorai that will offer advice to young adults struggling with loneliness. She won’t be alone. Call Annie is an ‘AI friend’ you can phone or FaceTime to ask anything you want. A similar product, Replika, has millions of users.” And of course we’ve all seen those movies – such as Her and Ex Machina – that vividly illustrate how AIs insert themselves between people and relationships with other humans.

In his opening words to the Senate judiciary subcommittee’s hearing, the chairman, Senator Blumenthal, said this: “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is: predators on the internet; toxic content; exploiting children, creating dangers for them… Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.”

Amen to that. The only thing wrong with the senator’s stirring introduction is the word “before”. The threats and the risks are already here. And we are about to find out if Marx’s view of history was the one to go for.


What I’ve been reading

Capitalist punishment
Will AI Become the New McKinsey? is a perceptive essay in the New Yorker by Ted Chiang.

Founders keepers
Henry Farrell has written a fabulous post called The Cult of the Founders on the Crooked Timber blog.

Superstore me
The Dead Silence of Goods is a lovely essay in the Paris Review by Adrienne Raphel about Annie Ernaux’s musings on the “superstore” phenomenon.

OpenAI CEO Sam Altman at the US Senate hearing on AI last week. Photograph: Patrick Semansky/AP
