The Guardian (USA)

AI likely to spell end of traditional school classroom, leading expert says

- Hannah Devlin, Science correspondent, in Geneva

Recent advances in AI are likely to spell the end of the traditional school classroom, one of the world’s leading experts on AI has predicted.

Prof Stuart Russell, a British computer scientist based at the University of California, Berkeley, said that personalised ChatGPT-style tutors have the potential to hugely enrich education and widen global access by delivering personalised tuition to every household with a smartphone. The technology could feasibly deliver “most material through to the end of high school”, he said.

“Education is the biggest benefit that we can look for in the next few years,” Russell said before a talk on Friday at the UN’s AI for Good Global Summit in Geneva. “It ought to be possible within a few years, maybe by the end of this decade, to be delivering a pretty high quality of education to every child in the world. That’s potentially transformative.”

However, he cautioned that deploying the powerful technology in the education sector also carries risks, including the potential for indoctrination.

Russell cited evidence from studies using human tutors showing that one-to-one teaching can be two to three times more effective than traditional classroom lessons, allowing children to get tailored support and be led by curiosity.

“Oxford and Cambridge don’t really use a traditional classroom … they use tutors presumably because it’s more effective,” he said. “It’s literally infeasible to do that for every child in the world. There aren’t enough adults to go around.”

OpenAI is already exploring educational applications, announcing a partnership in March with an education nonprofit, the Khan Academy, to pilot a virtual tutor powered by GPT-4.

This prospect may prompt “reasonable fears” among teachers and teaching unions of “fewer teachers being employed – possibly even none,” Russell said. Human involvement would still be essential, he predicted, but could be drastically different from the traditional role of a teacher, potentially incorporating “playground monitor” responsibilities, facilitating more complex collective activities and delivering civic and moral education.

“We haven’t done the experiments so we don’t know whether an AI system is going to be enough for a child. There’s motivation, there’s learning to collaborate, it’s not just ‘Can I do the sums?’” Russell said. “It will be essential to ensure that the social aspects of childhood are preserved and improved.”

The technology will also need to be carefully risk-assessed.

“Hopefully the system, if properly designed, won’t tell a child how to make a bioweapon. I think that’s manageable,” Russell said. A more pressing worry is the potential for hijacking of the software by authoritarian regimes or other players, he suggested. “I’m sure the Chinese government hopes [the technology] is more effective at inculcating loyalty to the state,” he said. “I suppose we’d expect this technology to be more effective than a book or a teacher.”

Russell has spent years highlighting the broader existential risks posed by AI, and was a signatory of an open letter in March, signed by Elon Musk and others, calling for a pause in an “out-of-control race” to develop powerful digital minds. The issue has become more urgent since the emergence of large language models, Russell said. “I think of [artificial general intelligence] as a giant magnet in the future,” he said. “The closer we get to it the stronger the force is. It definitely feels closer than it used to.”

Policymakers are belatedly engaging with the issue, he said. “I think the governments have woken up … now they’re running around figuring out what to do,” he said. “That’s good – at least people are paying attention.”

However, controlling AI systems poses both regulatory and technical challenges, because even the experts don’t know how to quantify the risks of losing control of a system. OpenAI announced on Thursday that it would devote 20% of its compute power to seeking a solution for “steering or controlling a potentially super-intelligent AI, and preventing it from going rogue”.

“The large language models in particular, we have really no idea how they work,” Russell said. “We don’t know whether they are capable of reasoning or planning. They may have internal goals that they are pursuing – we don’t know what they are.”

Even beyond direct risks, the systems can have other unpredictable consequences for everything from action on climate change to relations with China.

“Hundreds of millions of people, fairly soon billions, will be in conversation with these things all the time,” said Russell. “We don’t know what direction they could change global opinion and political tendencies.”

“We could walk into a massive environmental crisis or nuclear war and not even realise why it’s happened,” he added. “Those are just consequences of the fact that whatever direction it moves public opinion, it does so in a correlated way across the entire world.”

Photograph: Ben Birchall/PA – Students using laptop computers to study in class. Russell said AI technology could feasibly deliver ‘most material through to the end of high school’.
