The Manila Times

How fatalistic should we be on AI?

- John Thornhill

A long line of prestigious speakers, ranging from Sir Winston Churchill to Dame Iris Murdoch, has delivered the annual Romanes lecture at the University of Oxford, starting with William Gladstone in 1892.

But rarely, if ever, can a lecturer have made such an arresting comment as Geoffrey Hinton did last month. The leading artificial intelligence researcher’s speech, provocatively entitled “Will Digital Intelligence Replace Biological Intelligence?”, concluded: almost certainly, yes. But Hinton rejected the idea, common in some West Coast tech circles, that humanism is somehow “racist” in continuing to assert the primacy of our own species over electronic forms of intelligence. “We humans should make our best efforts to stay around,” he joked.

The British-Canadian computer scientist came to fame as one of the pioneers of the “deep learning” techniques that have revolutionised AI, enabling the creation of generative AI chatbots such as ChatGPT. For most of his career in academia and at Google, Hinton believed that AI did not pose a threat to humanity. But the 76-year-old researcher says he experienced an “epiphany” last year and quit Google to speak out about the risks.

Hinton realised that increasingly powerful AI models could act as “hive minds”, sharing what they learnt with each other, giving them a huge advantage over humans. “That made me realise that they may be a better form of intelligence,” he told me in an interview before his lecture.

It still seems fantastical that lines of software code could threaten humanity. But Hinton sees two main risks. The first is that bad humans will give machines bad goals and use them for bad purposes, such as mass disinformation, bioterrorism, cyberwarfare and killer robots. In particular, open-source AI models, such as Meta’s Llama, are putting enormous capabilities in the hands of bad people. “I think it’s completely crazy to open source these big models,” he says.

But he predicts that the models might also “evolve” in dangerous ways, developing an intentionality to control. “If I were advising governments, I would say that there’s a 10 per cent chance these things will wipe out humanity in the next 20 years. I think that would be a reasonable number,” he says.

Hinton’s arguments have been attacked on two fronts. First, some researchers argue that generative AI models are nothing more than expensive statistical tricks and that existential risks from the technology are “science fiction fantasy”.

The prominent scholar Noam Chomsky argues that humans are blessed with a genetically installed “operating system” that helps us understand language, one that machines lack. But Hinton argues this is nonsense, given that OpenAI’s latest model, GPT-4, can learn language and exhibit empathy, reasoning and sarcasm. “I am making a very strong claim that these models do understand,” he said in his lecture.

The other line of attack comes from Yann LeCun, chief AI scientist at Meta. LeCun, a supporter of open-source models, argues that our current AI systems are dumber than cats and it is “preposterous” to believe they pose a threat to humans, either by design or default. “I think Yann is being a bit naive. The future of humanity rests on this,” Hinton responds.

The calm and measured tones of Hinton’s delivery are in stark contrast to the bleak fatalism of his message. Can anything be done to improve humanity’s chances? “I wish I knew,” he replies. “I’m not preaching a particular solution, I’m just preaching the problem.”

He was encouraged that the UK hosted an AI safety summit at Bletchley Park last year, stimulating an international policy debate. But since then, he says, the British government “has basically decided that profits come before safety”. As with climate change, he suggests serious policy change will only happen once a scientific consensus is reached. And he accepts that such a consensus does not exist today. Citing the physicist Max Planck, Hinton grimly adds: “Science progresses one funeral at a time.”

He says he is heartened that a younger generation of computer scientists is taking existential risk seriously and suggests that 30 per cent of AI researchers should be devoted to safety issues, compared with about 1 per cent today.

We should be instinctively wary of researchers who conclude that more research is needed. But in this case, given the stakes and uncertainties involved, we had better hurry up. What is extraordinary about the debate on AI risk is the broad spectrum of views out there. We need to find a new consensus.

A photo taken on February 26, 2024 shows the logo of the ChatGPT application developed by US artificial intelligence research organization OpenAI on a smartphone screen (L) and the letters AI on a laptop screen in Frankfurt am Main, western Germany. Photo by Kirill KUDRYAVTSEV / AFP
