The Daily Telegraph

The next wave of AI systems threatens to upend economy

New models that have the ability to switch between tasks could put jobs at risk, writes James Titcomb


Staff at OpenAI did not expect much on Nov 30 last year when the company unveiled a “low-key research preview” called ChatGPT. Greg Brockman, OpenAI’s president, told workers that it wouldn’t have much of an impact on day-to-day business, confidently forecasting that it would only be noticed in a few nerdy corners of Twitter.

It quickly became obvious that this was a wild underestimate. Millions of users signed up within days and ChatGPT was heralded as the most important technology in a decade, leading to a worldwide fervour about artificial intelligence (AI).

Employees could be forgiven for failing to predict its popularity, though. ChatGPT, with its ability to conjure up essays and arguments, may have astonished its early users, but to its developers it was positively medieval.

The underlying AI system it was based on, known as GPT-3.5, was almost a year old. The company had already developed its supercharged successor, GPT-4, and was preparing to release it to the public. OpenAI described it as being 10 times more advanced, saying it could understand not only text but images, and could pass legal exams.

Now, just over a year later, the company is taking its first steps toward a vastly more powerful system. Techies gossip about GPT-5 with the awe that was once reserved for a new iPhone. The release of GPT-5 is expected to be the AI event of 2024.

Developing software is typically a case of tweaking previous versions to eke out small improvements.

Creating new AI systems is often a case of starting again. An unprecedentedly vast amount of data is thrown at an unprecedentedly powerful system of next-generation microchips, resulting in a model several times more powerful. GPT-1, the primordial model created in 2018, had 117m parameters – the internal values a model adjusts as it learns. GPT-3 had more than a thousand times that number, at 175bn, and GPT-4 is reported to be another tenfold increase, at about 1.7 trillion.

The computing requirements have soared too. GPT-4 reportedly required 16,000 high-end Nvidia A100 chips, against 1,024 for the previous generation. Little is known about the next wave of models, but they are certain to be trained on Nvidia’s new H100 chips, a vastly more powerful successor designed for training large AI models.

“The step up from GPT-3 to GPT-4 was so dramatic that you would be a fool not to try it again,” says Oren Etzioni, the former chief executive of the Allen Institute for AI.

Google, which unveiled its new model, Gemini, this month, is preparing to release the more powerful Gemini Ultra in the new year. Anthropic, the Amazon-backed AI lab, may also launch a new system.

Scientists are divided, though, on exactly what more power will mean. Today’s large language models are approaching the upper limits on certain tasks. Google’s Gemini already outperforms humans on a widely used language comprehension test and on computer programming exams.

Nathan Benaich, the founder of investment firm Air Street Capital and the co-author of the annual State of AI report, says the next generation of systems will be “multi-modal” – capable of understanding not only text but images, videos and audio. That, he says, will bring them closer to understanding the world.

Demis Hassabis, the head of Google’s DeepMind lab, has said this could come to include sensations such as touch.

Matt Clifford, a tech entrepreneur who led the Government’s work on last month’s AI Safety Summit, says that the next wave of models could display capabilities akin to reasoning and planning – qualities that we might associate with human intelligence.

AI that can switch from one task to another would be a step towards autonomous “agents” – systems that can carry out tasks on people’s behalf, such as booking a holiday or reading and answering emails.

The consequences of that could be profound. While today’s AI systems have threatened to take jobs in areas such as copywriting and design, they must typically be chaperoned through the writing or illustrating process. Those that can turn their words into action – a customer service bot that can book flights, for example – would be more threatening.

The next wave of models will, however, face increasing government scrutiny. Nine companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, Mistral, OpenAI and Elon Musk’s xAI – have agreed to have their systems tested by the UK Government’s AI Safety Institute before they are released.

Equally, the next wave of AI systems could prove to be a bust. Sceptics believe that most of the easy gains have already been made.

But if the capabilities of next year’s models remain unknown for now, it seems certain that existing AI technologies will become more widely used. One flashpoint is likely to be elections, as 2024 will see more than 2bn people go to the polls in countries including the US, India and Britain.

“We’re going to see deep fakes and their impact placed under the microscope,” says Benaich.

In 2023, AI may have captured the popular imagination, but it might not be until 2024 that its impact really starts to be felt.


Sundar Pichai, Google’s chief executive, discusses artificial intelligence. The company unveiled its new Gemini system this month
