The Day

‘Godfather of AI’ warns of tech ills on leaving Google

By MATT O’BRIEN and WYATTE GRANTHAM-PHILIPS, AP Technology Reporters. Matt O’Brien reported from Cambridge, Mass.

Washington — Sounding alarms about artificial intelligence has become a popular pastime in the ChatGPT era, taken up by high-profile figures as varied as industrialist Elon Musk, leftist intellectual Noam Chomsky and the 99-year-old retired statesman Henry Kissinger.

But it's the concerns of insiders in the AI research community that are attracting particular attention. Geoffrey Hinton, a pioneering researcher and the so-called “Godfather of AI,” quit his role at Google so he could more freely speak about the dangers of the technology he helped create.

Over his decades-long career, Hinton's pioneering work on deep learning and neural networks helped lay the foundation for much of the AI technology we see today.

There has been a spasm of AI introductions in recent months. San Francisco-based startup OpenAI, the Microsoft-backed company behind ChatGPT, rolled out its latest artificial intelligence model, GPT-4, in March. Other tech giants have invested in competing tools — including Google's “Bard.”

Some of the dangers of AI chatbots are “quite scary,” Hinton told the BBC. “Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be.”

In an interview with MIT Technology Review, Hinton also pointed to “bad actors” who may use AI in ways that could have detrimental impacts on society — such as manipulating elections or instigating violence.

Hinton, 75, says he retired from Google so that he could speak openly about the potential risks as someone who no longer works for the tech giant.

“I want to talk about AI safety issues without having to worry about how it interacts with Google's business,” he told MIT Technology Review. “As long as I'm paid by Google, I can't do that.”

Since announcing his departure, Hinton has maintained that Google has “acted very responsibly” regarding AI. He told MIT Technology Review that there's also “a lot of good things about Google” that he would want to talk about — but those comments would be “much more credible if I'm not at Google anymore.”

Google confirmed that Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.

Hinton declined further comment Tuesday but said he would talk more about it at a conference Wednesday.

At the heart of the debate on the state of AI is whether the primary dangers are in the future or present. On one side are hypothetical scenarios of existential risk caused by computers that supersede human intelligence. On the other are concerns about automated technology that's already getting widely deployed by businesses and governments and can cause real-world harms.

“For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn't only include AI experts and developers,” said Alondra Nelson, who until February led the White House Office of Science and Technology Policy and its push to craft guidelines around the responsible use of AI tools.

“AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a non-exploitative future with technology to look like,” Nelson said in an interview last month.

A number of AI researchers have long expressed concerns about racial, gender and other forms of bias in AI systems, including text-based large language models that are trained on huge troves of human writing and can amplify discrimination that exists in society.

“We need to take a step back and really think about whose needs are being put front and center in the discussion about risks,” said Sarah Myers West, managing director of the nonprofit AI Now Institute. “The harms that are being enacted by AI systems today are really not evenly distributed. It's very much exacerbating existing patterns of inequality.”

Hinton was one of three AI pioneers who in 2019 won the Turing Award, an honor that has become known as the tech industry's version of the Nobel Prize. The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.

Bengio, a professor at the University of Montreal, signed a petition in late March calling for tech companies to agree to a six-month pause on developing powerful AI systems, while LeCun, a top AI scientist at Facebook parent Meta, has taken a more optimistic approach.

Computer scientist Geoffrey Hinton in a 2015 AP file photo.
