Mint Mumbai

How to define artificial general intelligence


The idea of machines outsmarting humans has long been the subject of science fiction. Rapid improvements in artificial-intelligence (AI) programs over the past decade have led some experts to conclude that science fiction could soon become fact. On March 19th Jensen Huang, the chief executive of Nvidia, the world’s biggest manufacturer of computer chips and its third most valuable publicly traded company, said he believed today’s models could advance to the point of so-called artificial general intelligence (AGI) within five years. What exactly is AGI—and how can we judge when it has arrived?

Mr Huang’s words should be taken with a pinch of salt: Nvidia’s profits have soared because of the growing demand for its high-tech chips, which are used to train AI models. Promoting AI is thus good for business. But Mr Huang did set out a clear definition of what he believes would constitute AGI: a program that can do 8% better than most people at certain tests, such as bar exams for lawyers or logic quizzes.

This proposal is the latest in a long line of definitions. In the 1950s Alan Turing, a British mathematician, said that talking to a model that had achieved AGI would be indistinguishable from talking to a human. Arguably the most advanced large language models already pass the Turing test. But in recent years tech leaders have moved the goalposts by suggesting a host of new definitions. Mustafa Suleyman, co-founder of DeepMind, an AI-research firm, and chief executive of a newly established AI division within Microsoft, believes that what he calls “artificial capable intelligence”—a “modern Turing test”—will have been reached when a model is given $100,000 and turns it into $1m without instruction. (Mr Suleyman is a board member of The Economist’s parent company.) Steve Wozniak, a co-founder of Apple, has a more prosaic vision of AGI: a machine that can enter an average home and make a cup of coffee.

Some researchers reject the concept of AGI altogether. Mike Cook, of King’s College London, says the term has no scientific basis and means different things to different people. Few definitions of AGI attract consensus, admits Harry Law, of the University of Cambridge, but most are based on the idea of a model that can outperform humans at most tasks—whether making coffee or making millions. In January researchers at DeepMind proposed six levels of AGI, ranked by the proportion of skilled adults that a model can outperform: they say the technology has reached only the lowest level, with AI tools equal to or slightly better than an unskilled human.

The question of what happens when we reach AGI obsesses some researchers. Eliezer Yudkowsky, a computer scientist who has been fretting about AI for 20 years, worries that by the time people recognise that models have become sentient, it will be too late to stop them and humans will become enslaved. But few researchers share his views. Most believe that AI is simply following human inputs, often poorly.

There may be no consensus about what constitutes AGI among academics or businessmen—but a definition could soon be agreed on in court. As part of a lawsuit lodged in February against OpenAI, a company he co-founded, Elon Musk is asking a court in California to decide whether the firm’s GPT-4 model shows signs of AGI. If it does, Mr Musk claims, OpenAI has gone against its founding principle that it will license only pre-AGI technology. The company denies that it has done so. Through his lawyers, Mr Musk is seeking a jury trial. Should his wish be granted, a handful of non-experts could decide a question that has vexed AI experts for decades.

