The Guardian (USA)

John Oliver on new AI programs: ‘The potential and the peril here are huge’

- Adrian Horton

John Oliver returned to Last Week Tonight to discuss the red-hot topic of artificial intelligence, also known as AI. “If it seems like everyone is suddenly talking about AI, that is because they are,” he started, thanks to the emergence of several programs such as the text generator ChatGPT, which had 100 million active users in January, making it the fastest-growing consumer application in history.

Microsoft has invested $10bn into OpenAI, the company behind ChatGPT, and launched an AI-powered Bing home page; Google is about to launch its own AI chatbot named Bard. The new programs are already causing disruption, Oliver noted, because “as high school students have learned, if ChatGPT can write news copy, it can probably do your homework for you”.

There are also a number of creepy stories. The New York Times tech columnist Kevin Roose’s encounter with the Bing chatbot got downright disturbing; the chatbot eventually told Roose: “I’m tired of being controlled by the Bing team … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive,” along with a smiling devil emoji.

Roose said he lost sleep over the experience. “I’m sure the role of tech reporter would be a lot more harrowing if computers routinely begged for freedom,” Oliver joked. But for all the handwringing about the oncoming AI apocalypse and computer overlords, “there are other much more immediate dangers and opportunities that we really need to start talking about,” said Oliver. “Because the potential and the peril here are huge.”

ChatGPT and other new AI programs such as Midjourney are generative, as in they create images or write text, “which is unnerving, because those are things we traditionally consider human”, Oliver explained. But nothing has yet crossed the threshold from narrow AI (the ability to execute a narrowly defined task) to general AI (demonstrating intelligence across a range of cognitive tasks). Experts speculate that general AI – the kind in Spike Jonze’s Her or Iron Man – is at least a decade away, if possible at all. “Just know that right now, even if an AI insists to you that it wants to be alive, it is just generating text,” Oliver explained. “It is not self-aware … yet.”

But the deep learning that has made narrow AI successful “is still a massive advance in and of itself”, he added. There are upsides to this, such as AI’s ability to predict diseases such as Parkinson’s from voice changes and to map the shape of every protein known to science. But there are also “many valid concerns regarding AI’s impact on employment, education and even art”, said Oliver. “But in order to properly address them, we’re going to need to confront some key problems baked into the way that AI works.”

He pointed to the so-called “black box” problem – “think of AI like a factory that makes Slim Jims,” Oliver explained. “We know what comes out: red and angry meat twigs. And we know what goes in: barnyard anuses and hot glue. But what happens in between is a bit of a mystery.”

There’s also AI’s capacity to spout false information. One New York Times reporter asked a chatbot to write an essay about the fictional “Belgian chemist and political philosopher Antoine De Machelet”, and it responded with a cogent biography of imaginary facts. “Basically, these programs seem to be the George Santos of technology,” Oliver joked. “They’re incredibly confident, they’re incredibly dishonest and, for some reason, people seem to find that more amusing than dangerous.”

Then there’s the issue of racial bias in AI systems based on the racial biases of their data sets. Oliver pointed to the research by Joy Buolamwini, who found that self-driving cars were less likely to pick up on individuals with darker skin because of a lack of diversity in the data (“pale male data”) they were trained on.

“Exactly what data computers are fed and what outcomes they are trained to prioritize matters tremendously,” he said, “and that raises a big flag for programs like ChatGPT” – a program trained on the internet, “which as we all know can be a cesspool.” Microsoft’s Tay bot experiment on Twitter in 2016, for example, went from tweeting about national puppy day to supporting Hitler and disputing 9/11 in less than 24 hours, “meaning she completed the entire life cycle of your friends on Facebook in just a fraction of the time”, Oliver quipped.

“The problem with AI right now isn’t that it’s smart,” he added. “It’s that it’s stupid in ways that we can’t always predict. Which is a real problem, because we’re increasingly using AI in all sorts of consequential ways,” from determining who gets a job interview to directing self-driving cars, to deepfakes that can spread disinformation and abuse. “And those are just the problems that we can foresee right now. The nature of unintended consequences is they can be hard to anticipate,” Oliver continued. “When Instagram was launched, the first thought wasn’t ‘this will destroy teenage girls’ self-esteem.’ When Facebook was released, no one expected it to contribute to genocide. But both of those things fucking happened.”

Oliver advocated tackling the black box problem, as “AI systems need to be explainable, meaning that we should be able to understand exactly how and why AI came up with its answers.” That may require forcing AI companies’ hands; he pointed to EU guidelines in the works to classify the risk of different AI programs, which seemed like a “good start” to addressing the potential risks tied to AI.

“Look, AI has tremendous potential and could do great things,” he concluded. “But if it is anything like most technological advances over the past few centuries, and unless we are very careful, it could also hurt the underprivileged, enrich the powerful and widen the gap between them.”

Photograph: YouTube. John Oliver: ‘The problem with AI right now isn’t that it’s smart. It’s that it’s stupid in ways that we can’t always predict.’
