Will AI start a war with humanity?
Probably not, but some experts in artificial intelligence are worried about the pace of change and want to call a halt to research. Can that be sensible? Simon Wilson reports
What’s happened?
The pace of developments in artificial intelligence (AI) shows no sign of letting up – sparking excitement but also concerns. Just since the autumn, says Juliet Samuel in The Times, the capability of the breakthrough AI tool ChatGPT has advanced substantially. The latest version, GPT-4, can pass an American bar exam with a result at the 90th percentile, for example, versus the tenth-percentile score achieved by its predecessor in November. In a “frenzy of innovation”, the “entire plumbing of the professional world is being reworked and entire businesses are being founded in which mass-market AI tools are integral to their success”. It’s an exciting field – for economic and financial reasons as well as technological ones. A report published last month by Goldman Sachs predicted that the widespread adoption of AI could significantly boost productivity and grow the world’s annual economic output by 7%. But to many credible observers, AI’s speed and direction of travel are also profoundly worrying. Google boss Sundar Pichai believes that AI could be “very harmful” if deployed wrongly, and is developing too fast. “So does that keep me up at night? Absolutely,” he said.
Who else is worried?
Large parts of the tech industry – including serious AI experts, well-steeped in the latest breakthroughs – are deeply concerned. Last month, in an open letter posted by the Future of Life Institute, more than 1,100 tech scientists and executives, including Elon Musk and many prominent AI researchers, called for a six-month moratorium on the “dangerous” development of cutting-edge systems more powerful than GPT-4. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control,” the letter argued. The world now needs a six-month pause that “should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the group added.
Why the panic?
The fear is that AI systems with human-competitive intelligence pose profound risks to society and humanity, and the pace of change in recent months – including Microsoft-backed OpenAI’s ChatGPT in November and the March release of GPT-4, the sophisticated model that underpins the chatbot – has been unprecedented. The worry is that AI is advancing more rapidly than we realised, says Philip Johnston in The Telegraph. “It has the capacity quite quickly to create a class of economically redundant people” and undermine the basis of modern life. “How we deal with this and avert the descent into a real-life dystopia is the biggest challenge of our times.” The disruption will be akin to the industrial revolution, or the deindustrialisation of the later 20th century – but more serious, because it will be so all-encompassing in its effects on labour. It’s an issue governments are only now waking up to, but “they don’t really know what to do about it”.
What should they do?
One of the big debates in working out how to respond is the question of whether AI technology is approaching the historic turning-point of “artificial general intelligence” (AGI) – a computer system so powerful that it can generate new scientific knowledge and perform any task a human can do. Last month, Microsoft researchers given access to GPT-4 concluded that, given the breadth and depth of its capabilities, displaying close to human performance on a variety of novel and difficult tasks, the software “could reasonably be viewed as an early (yet still incomplete) version of an AGI system”. That view is controversial, but GPT-4 has certainly concentrated minds. In Time magazine, AI expert Eliezer Yudkowsky, research lead at the Machine Intelligence Research Institute, warns that the most likely result of building “superhumanly smart AI” is that “literally everyone on Earth will die”. He argued governments should strictly control and monitor the use of advanced computer chips used to construct AI systems, and even consider air strikes against rogue research centres.
What about less extreme options?
In the Financial Times, AI expert Ian Hogarth makes a case for slowing down what he calls “God-like AI” – defining AGI as “a superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it.” For less sophisticated, “narrowly useful” AI systems, Hogarth calls for a regulatory regime similar to the pre-market approval process for pharmaceuticals. And he calls for legislators to question the leaders of AI labs under oath about safety risks. For wider-ranging applications, Hogarth calls for removing the profit motive to nullify the allegedly dangerous dynamics of the private-sector race. His model is the Cern particle physics laboratory, and he calls for a similar international agency to research AGI.
Is a six-month pause practical?
Clearly not – and nor is one desirable, says Rohan Silva in The Times. In the words of the Google co-founder Larry Page, “good ideas are always crazy until they’re not” – and it’s the same with AI, which is now beginning to drive progress that humans couldn’t achieve alone. For example, the London-based team at DeepMind recently used AI to crack the protein-folding problem, which had flummoxed researchers for decades and which “may be a springboard for developing important new drugs”. And AI software has the potential to revolutionise healthcare systems, freeing up capacity and improving care. The challenge will be to avoid being panicked into over-regulation. “With public services at breaking point and government finances stretched perilously thin, can we really afford to suppress the gains that AI might bring?”
⬤ The surge in “big bling” is looking “decidedly C-shaped”, says Andrea Felsted on Bloomberg. LVMH reported blowout earnings, driven by “revenge spending” in China as the economy reopened.
Hermès also beat expectations. But Kering managed just 1% growth in like-for-like sales in the first quarter, below consensus estimates of 2.9%, including sluggish growth at its key Gucci brand. The group wasn’t completely left behind – it still saw double-digit, year-on-year growth in China across its brands. Yet there are signs that Chinese consumers are growing bored with the “bold maximalism” and “logo-heavy look” that drove Gucci’s turnaround six years ago. Kering is trying to address this, with a new designer, a focus on taking the brand upmarket and investment in improving its stores. But “it will take time for Gucci to regain its traction in China”.
⬤ Fund manager Schroders is “reach[ing] for the scissors” on its stake in Revolut, says Nils Pratley in The Guardian. Schroders’ holding – via the Capital Global Innovation Trust – is “comparatively modest”, but its new valuation implies that Revolut is worth $18bn, far below the $33bn that the fintech firm was valued at in its last financing round in 2021. Putting a price on unlisted stocks is tricky and some may feel the reduction is excessive, but more likely it doesn’t go far enough. “Isn’t $18bn still a bit punchy in today’s colder climate in tech-land?” Revolut is valued at more than the “solid and successful” insurer Legal & General (£15bn), yet it made a profit of just £26m in 2021, compared to L&G’s £2.3bn. The bull case is that it is “young, growing and increasingly global”, but “perspective is still needed”.
⬤ “Big pharma is often a target for litigants,” says Alex Brummer in the Daily Mail. Take AstraZeneca, which faces lawsuits over injuries allegedly caused by its Covid vaccine. Shareholder advisory service Pirc is recommending that investors vote against the re-election of CEO Pascal Soriot on those grounds. Yet “the same could be said for every life sciences boss”. Under Soriot, Astra has become a world leader in immunology and is now the largest firm in the FTSE 100 – more valuable than Pfizer, which tried to buy it in 2014. “Pirc should be ignored.”