Money Week

Will AI start a war with humanity?

Probably not, but some experts in artificial intelligence are worried about the pace of change and want to call a halt to research. Can that be sensible? Simon Wilson reports


What’s happened?

The pace of developments in artificial intelligence (AI) shows no sign of letting up – sparking excitement but also concerns. Just since the autumn, says Juliet Samuel in The Times, the capability of the breakthrough AI tool ChatGPT has advanced substantially. The latest version, GPT-4, can pass an American bar exam with a result at the 90th percentile, versus its tenth-percentile score in November, for example. In a “frenzy of innovation”, the “entire plumbing of the professional world is being reworked and entire businesses are being founded in which mass-market AI tools are integral to their success”. It’s an exciting field – for economic and financial reasons as well as technological ones. A report published last month by Goldman Sachs predicted that the widespread adoption of AI could significantly boost productivity and grow the world’s annual economic output by 7%. But to many credible observers, AI’s speed and direction of travel are also profoundly worrying. Google boss Sundar Pichai believes that AI could be “very harmful” if deployed wrongly, and is developing too fast. “So does that keep me up at night? Absolutely,” he said.

Who else is worried?

Large parts of the tech industry – including serious AI experts, well steeped in the latest breakthroughs – are deeply concerned. Last month, in an open letter posted by the Future of Life Institute, more than 1,100 tech scientists and executives, including Elon Musk and many prominent AI researchers, called for a six-month moratorium on the “dangerous” development of cutting-edge systems more powerful than GPT-4. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control,” the letter argued. The world now needs a six-month pause that “should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the group added.

Why the panic?

The fear is that AI systems with human-competitive intelligence pose profound risks to society and humanity, and the pace of change in recent months – including the November launch of Microsoft-backed OpenAI’s ChatGPT and the March release of GPT-4, the sophisticated model that underpins the chatbot – has been unprecedented. The worry is that AI is advancing more rapidly than we realised, says Philip Johnston in The Telegraph. “It has the capacity quite quickly to create a class of economically redundant people” and undermine the basis of modern life. “How we deal with this and avert the descent into a real-life dystopia is the biggest challenge of our times.” The disruption will be akin to the industrial revolution, or the deindustrialisation of the later 20th century – but more serious, because it will be so all-encompassing in its effects on labour. It’s an issue governments are only now waking up to, but “they don’t really know what to do about it”.

What should they do?

One of the big debates in working out how to respond is the question of whether AI technology is approaching the historic turning-point of “artificial general intelligence” (AGI) – a computer system so powerful that it can generate new scientific knowledge and perform any task a human can do. Last month, Microsoft researchers given access to GPT-4 concluded that, given the breadth and depth of its capabilities, displaying close to human performance on a variety of novel and difficult tasks, the software “could reasonably be viewed as an early (yet still incomplete) version of an AGI system”. That view is controversial, but GPT-4 has certainly concentrated minds. In Time magazine, AI expert Eliezer Yudkowsky, research lead at the Machine Intelligence Research Institute, warns that the most likely result of building “superhumanly smart AI” is that “literally everyone on Earth will die”. He argued that governments should strictly control and monitor the use of advanced computer chips used to construct AI systems, and even consider air strikes against rogue research centres.

What about less extreme options?

In the Financial Times, AI expert Ian Hogarth makes a case for slowing down what he calls “God-like AI” – defining AGI as “a superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it”. For less sophisticated, “narrowly useful” AI systems, Hogarth calls for a regulatory regime similar to the pre-market approval process for pharmaceuticals. And he calls for legislators to question the leaders of AI labs under oath about safety risks. For wider-ranging applications, Hogarth calls for removing the profit motive to nullify the allegedly dangerous dynamics of the private-sector race. His model is the Cern particle physics laboratory, and he calls for a similar international agency to research AGI.

Is a six-month pause practical?


Clearly not – and nor is one desirable, says Rohan Silva in The Times. In the words of the Google co-founder Larry Page, “good ideas are always crazy until they’re not” – and it’s the same with AI, which is now beginning to drive progress that humans couldn’t achieve alone. For example, the London-based team at DeepMind recently used AI to crack the protein-folding problem, which had flummoxed researchers for decades and which “may be a springboard for developing important new drugs”. And AI software has the potential to revolutionise healthcare systems, freeing up capacity and improving care. The challenge with AI will be to avoid being panicked into over-regulation. “With public services at breaking point and government finances stretched perilously thin, can we really afford to suppress the gains that AI might bring?”

⬤ The surge in “big bling” is looking “decidedly C-shaped”, says Andrea Felsted on Bloomberg. LVMH reported blowout earnings, driven by “revenge spending” in China, as the economy reopened. Hermès also beat expectations. But Kering managed just 1% growth in like-for-like sales in the first quarter, below consensus estimates of 2.9%, including sluggish growth at its key Gucci brand. The group wasn’t completely left behind – it still saw double-digit, year-on-year growth in China across its brands. Yet there are signs that Chinese consumers are growing bored with the “bold maximalism” and “logo-heavy look” that drove Gucci’s turnaround six years ago. Kering is trying to address this, with a new designer, a focus on taking the brand upmarket, and investment in improving its stores. But “it will take time for Gucci to regain its traction in China”.

⬤ Fund manager Schroders is “reach[ing] for the scissors” on its stake in Revolut, says Nils Pratley in The Guardian. Schroders’ holding – via the Capital Global Innovation Trust – is “comparatively modest”, but its new valuation implies that Revolut is worth $18bn, far below the $33bn that the fintech firm was valued at in its last financing round in 2021. Putting a price on unlisted stocks is tricky and some may feel the reduction is excessive, but more likely it doesn’t go far enough. “Isn’t $18bn still a bit punchy in today’s colder climate in tech-land?” Revolut is valued at more than the “solid and successful” insurer Legal & General (£15bn), yet it made a profit of just £26m in 2021, compared with L&G’s £2.3bn. The bull case is that it is “young, growing and increasingly global”, but “perspective is still needed”.

⬤ “Big pharma is often a target for litigants,” says Alex Brummer in the Daily Mail. Take AstraZeneca, which faces lawsuits over injuries allegedly caused by its Covid vaccine. Shareholder advisory service Pirc is recommending that investors vote against the re-election of CEO Pascal Soriot on those grounds. Yet “the same could be said for every life sciences boss”. Under Soriot, Astra has become a world leader in immunology and is now the largest firm in the FTSE 100 – more valuable than Pfizer, which tried to buy it in 2014. “Pirc should be ignored.”

Is Skynet on the verge of becoming a reality?
