The Guardian (USA)

Thank the Lords someone is worried about AI-controlled weapons systems

- John Naughton

The most interesting TV I’ve watched recently did not come from a conventional television channel, nor even from Netflix, but from TV coverage of parliament. It was a recording of a meeting of the AI in weapons systems select committee of the House of Lords, which was set up to inquire into “how should autonomous weapons be developed, used and regulated”. The particular session I was interested in was the one held on 20 April, during which the committee heard from four expert witnesses – Kenneth Payne, who is professor of strategy at King’s College London; Keith Dear, director of artificial intelligence innovation at the computer company Fujitsu; James Black from the defence and security research group of Rand Europe; and Courtney Bowman, global director of privacy and civil liberties engineering at Palantir UK. An interesting mix, I thought – and so it turned out to be.

Autonomous weapons systems are ones that can select and attack a target without human intervention. It is believed (and not just by their boosters) that these systems could revolutionise warfare, and may be faster, more accurate and more resilient than existing weapons systems. And that they could, conceivably, even limit the casualties of war (though I’ll believe that when I see it).

The most striking thing about the session (for this columnist, anyway) was that, although it was ostensibly about the military uses of artificial intelligence in warfare, many of the issues and questions that arose in the two hours of discussion could equally have arisen in discussions about civilian deployment of the technology. Questions about safety and reliability, for example, or governance and control. And, of course, about regulation.

Many of the most interesting exchanges were about this last topic. “We just have to accept,” said Lord Browne of Ladyton resignedly at one point, “that we will never get in front of this technology. We’re always going to be trying to catch up. And if our consistent experience of public policy development sustains – and it will – then the technology will go at the speed of light and we will go at the speed of a tortoise. And that’s the world that we’re living in.”

This upset the professor on the panel. “Instinctively, I’m reluctant to say that’s the case,” quoth he. “I’m loth to agree with an argument that an academic would sum up as technological determinism – ignoring all kinds of institutional and cultural factors that go into shaping how individual societies develop their AI, but it’s certainly going to be challenging and I don’t think the existing institutional arrangements are adequate for those sorts of discussions to take place.”

Note the term “challenging”. It is also ubiquitous in civilian discussions about governance/regulation of AI, where it is a euphemism for “impossible”.

So, replied Browne, we should bring the technology “in house” (ie, under government control)?

At which point the guy from Fujitsu remarked laconically that “nothing would slow down AI progress faster than bringing it into government”. Cue laughter.

Then there was the question of proliferation, a perennial problem in arms control. How does the ubiquity of AI change that? Greatly, said the guy from Rand. “A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware: it’s missiles, it’s engines, it’s nuclear materials.”

Yep. And it’s also consumer drones that you buy from Amazon and rejig for military purposes, such as dropping grenades on Russian soldiers in trenches in Ukraine.

Overall, it was an illuminating session, a paradigmatic example of what deliberative democracy should be like: polite, measured, informed, respectful. And it prompted reflections about the fact that the best and most thoughtful discussions of difficult issues that take place in this benighted kingdom happen not in its elected chamber, but in the constitutional anomaly that is the House of Lords.

I first realised this during Tony Blair’s first term, when some of us were trying to get MPs to pay attention to the Regulation of Investigatory Powers Act, then being shepherded through parliament by the home secretary, Jack Straw, and his underling Charles Clarke. We discovered then that, of the 650 members of the House of Commons, only a handful displayed any interest at all in that flawed statute. (Most of them had accepted the Home Office bromide that it was just bringing telephone tapping into the digital age.) I was astonished to find that the only legislators who managed to improve the bill on its way to the statute book were a small group of those dedicated constitutional anomalies in the Lords, who put in a lot of time and effort trying to make it less defective than it would otherwise have been. It was a thankless task, and it was inspiring to see them do it. And it’s why I enjoyed watching them doing it again 10 days ago.


What I’ve been reading

Democratic deficit
A blistering post by Scott Galloway on his No Mercy/No Malice blog, Guardrails, outlines the catastrophic failure of democratic states to regulate tech companies.

Hit those keys
Barry Sanders has produced a lovely essay in Cabinet magazine on the machine that mechanised writing.

All chatted out
I’m ChatGPT, and for the Love of God, Please Don’t Make Me Do Any More Copywriting is a nice spoof by Joe Wellman on McSweeney’s Internet Tendency.

A Ukrainian soldier in Donetsk carrying a drone. Photograph: Libkos/AP
