Tech firms say laws to protect us from bad AI will limit ‘innovation’. Well, good

- John Naughton

Way back in May 2014, the European court of justice issued a landmark ruling that European citizens had the right to petition search engines to remove search results that linked to material that had been posted lawfully on third-party websites. This was popularly but misleadingly described as the “right to be forgotten”; it was really a right to have certain published material about the complainant delisted by search engines, of which Google was by far the most dominant. Or, to put it crudely, a right not to be found by Google.

On the morning the ruling was released, I had a phone call from a relatively senior Google employee whom I happened to know. It was clear from his call that the company had been ambushed by the ruling – its expensive legal team had plainly not expected it. But it was also clear that his US bosses were incensed by the effrontery of a mere European institution in issuing such a verdict. And when I mildly indicated that I regarded it as a reasonable judgment, I was treated to an energetic tirade, the gist of which was that the trouble with Europeans is that they’re “hostile to innovation”. At which point the conversation ended and I never heard from him again.

What brings this to mind is the tech companies’ reaction to a draft EU bill published last month that, when it becomes law in about two years’ time, will make it possible for people who have been harmed by software to sue the companies that produce and deploy it. The new bill, called the AI Liability Directive, will complement the EU’s AI Act, which is set to become EU law around the same time. The aim of these laws is to prevent tech companies from releasing dangerous systems, for example: algorithms that boost misinformation and target children with harmful content; facial recognition systems that are often discriminatory; predictive AI systems, used to approve or reject loans or to guide local policing strategies, that are less accurate for minorities; and so on. In other words, technologies that are currently almost entirely unregulated.

The AI Act mandates extra checks for “high-risk” uses of AI that have the most potential to harm people, particularly in areas such as policing, recruitment and healthcare. The new liability bill, says MIT Technology Review, “would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers and users of the technologies accountable and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.”

Right on cue, up pops the Computer & Communications Industry Association (CCIA), the lobbying outfit that represents tech companies in Brussels. Its letter to the two European commissioners responsible for the two acts immediately raises the concern that imposing strict liability on tech firms “would be disproportionate and ill-suited to the properties of software”. And, of course, it could have “a chilling effect” on “innovation”.

Ah yes. That would be the same innovation that led to the Cambridge Analytica scandal and Russian online meddling in the 2016 US presidential election and UK Brexit referendum, and that enabled the livestreaming of mass shootings. The same innovation behind the recommendation engines that radicalised extremists and directed “10 depression pins you might like” to a troubled teenager who subsequently ended her own life.

It’s difficult to decide which of the two assertions made by the CCIA – that strict liability is “ill-suited” to software or that “innovation” is the defining characteristic of the industry – is the more preposterous. For more than 50 years, the tech industry has been granted a latitude extended to no other industry, namely avoidance of legal liability for the innumerable deficiencies and vulnerabilities of its main product or the harm that those flaws cause.

What is even more remarkable, though, is how the tech companies’ claim to be the sole masters of “innovation” has been taken at face value for so long. But now two eminent competition lawyers, Ariel Ezrachi and Maurice Stucke, have called the companies’ bluff. In a remarkable new book, How Big-Tech Barons Smash Innovation – And How to Strike Back, they explain how the only kind of innovation tech companies tolerate is that which aligns with their own interests. They reveal how tech firms are ruthless in stifling disruptive or threatening innovations, either by pre-emptive acquisition or naked copycatting, and that their dominance of search engines and social media platforms restricts the visibility of promising innovations that might be competitively or societally useful. As an antidote to tech puffery, the book will be hard to beat. It should be required reading for everyone at Ofcom, the Competition and Markets Authority and the DCMS. And from now on “innovation for whom?” should be the first question to any tech booster lecturing you about innovation.

What I’ve been reading

The web of time
The Thorny Problem of Keeping the Internet’s Time is a fascinating New Yorker essay by Nate Hopper on the genius who, many years ago, created the arcane software system that synchronises the network’s clocks.

Trussed up
Project Fear 3.0 is a fine blogpost by Adam Tooze on criticism of the current Tory administration.

Tech’s progress
Ascension is a thoughtful essay by Drew Austin on how our relationship to digital technology has changed in the period 2019-2022.


Photograph: Future Publishing/Getty Images. A protest against facial recognition software in central London. Police use of live facial recognition is one of the ‘high-risk’ uses targeted by the EU’s AI Act.
