San Diego Union-Tribune

WHY CHATBOTS SOMETIMES ACT DOWNRIGHT WEIRD

These systems are not sentient but draw on the Internet for answers

BY CADE METZ. Metz writes for The New York Times.

Microsoft released a new version of its Bing search engine last week, and, unlike an ordinary search engine, it includes a chatbot that can answer questions in clear, concise prose.

Since then, people have noticed that some of what the Bing chatbot generates is inaccurate, misleading and downright weird, prompting fears that it has become sentient, or aware of the world around it.

That’s not the case. And to understand why, it’s important to know how chatbots really work.

Is the chatbot alive?

No. Let’s say that again: No! In June, a Google engineer, Blake Lemoine, claimed that similar chatbot technology being tested inside Google was sentient. That’s false. Chatbots are not conscious and are not intelligent — at least not in the way humans are intelligent.

Why does it seem alive then?

Let’s step back. The Bing chatbot is powered by a kind of artificial intelligence called a neural network. That may sound like a computerized brain, but the term is misleading.

A neural network is just a mathematical system that learns skills by analyzing vast amounts of digital data. As a neural network examines thousands of cat photos, for instance, it can learn to recognize a cat.
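For readers curious to see the idea in code, here is a minimal, hypothetical sketch in Python using the PyTorch library. The random numbers below stand in for cat and non-cat photos; nothing here is the actual system behind Bing, but it shows the basic loop of nudging a network’s numbers until its answers improve.

import torch
from torch import nn

# A tiny "neural network": layers of simple math whose internal numbers
# (weights) get adjusted until the outputs match the examples shown.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 64),  # a small color image, flattened into numbers
    nn.ReLU(),
    nn.Linear(64, 2),            # two outputs: "cat" or "not a cat"
)

# Stand-in data: random pixels labeled cat (1) or not cat (0).
images = torch.rand(16, 3, 32, 32)
labels = torch.randint(0, 2, (16,))

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong are the guesses?
    loss.backward()                        # trace the error back to each weight
    optimizer.step()                       # nudge the weights to be less wrong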

Most people use neural networks every day. It’s the technology that identifies people, pets and other objects in images posted to Internet services like Google Photos. It allows Siri and Alexa, the talking voice assistants from Apple and Amazon, to recognize the words you speak. And it’s what translates between English and Spanish on services like Google Translate.

Neural networks are very good at mimicking the way humans use language. And that can mislead us into thinking the technology is more powerful than it really is.

How exactly do neural networks mimic human language?

About five years ago, researchers at companies like Google and OpenAI, a San Francisco startup that recently released the popular ChatGPT chatbot, began building neural networks that learned from enormous amounts of digital text, including books, Wikipedia articles, chat logs and all sorts of other stuff posted to the Internet.

These neural networks are known as large language models. They are able to use those mounds of data to build what you might call a mathematical map of human language. Using this map, the neural networks can perform many tasks, like writing their own tweets, composing speeches, generating computer programs and, yes, having a conversation.
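As a rough illustration, here is a sketch in Python using the open-source Hugging Face transformers library and the small, public GPT-2 model. It is nowhere near the scale of the model behind Bing or ChatGPT, but the basic trick is the same: continue a piece of text one likely word at a time.

from transformers import pipeline

# Load a small, public language model (GPT-2), not the one behind Bing.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt word by word, based on patterns
# it absorbed from its training text.
prompt = "The new search engine can"
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])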

These large language models have proved useful. Microsoft offers a tool, Copilot, which is built on a large language model and can suggest the next line of code as computer programmers build software apps, in much the way that autocomplete tools suggest the next word as you type texts or emails.

Other companies offer similar technology that can generate marketing materials, emails and other text. This kind of technology is also known as generative AI.

Now companies are rolling out versions of this that you can chat with?

Exactly. In November, OpenAI released ChatGPT, giving the general public its first taste of this technology.

People were amazed — and rightly so.

These chatbots do not chat exactly like a human, but they often seem to. They can also write term papers and poetry and riff on almost any subject thrown their way.

Why do they get stuff wrong?

Because they learn from the Internet. Think about how much misinformation and other garbage is on the web.

These systems also don’t repeat what is on the Internet word for word. Drawing on what they have learned, they produce new text on their own; when that text strays from reality, AI researchers call it a “hallucination.”

This is why the chatbots may give you different answers if you ask the same question twice. They will say anything, whether it is based on reality or not.
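A toy sketch in plain Python shows why: at each step the model assigns probabilities to many possible next words and picks one partly at random, so reruns can differ. The word table here is invented for illustration and is far simpler than anything a real model computes.

import random

# Invented "next word" probabilities, standing in for what a real
# language model computes at every step of a response.
next_words = {
    "The capital of Australia is": [
        ("Canberra", 0.6),   # the right answer, most of the time
        ("Sydney", 0.3),     # a plausible-sounding wrong answer
        ("Melbourne", 0.1),
    ],
}

def answer(prompt):
    words, weights = zip(*next_words[prompt])
    # Sampling: usually picks the most likely word, but not always.
    return random.choices(words, weights=weights)[0]

prompt = "The capital of Australia is"
print(answer(prompt))  # might print "Canberra"
print(answer(prompt))  # might print "Sydney" on another run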

If chatbots ‘hallucinate,’ doesn’t that make them sentient?

AI researchers love to use terms that make these systems seem human. But hallucinate is just a catchy term for “they make stuff up.”

That sounds creepy and dangerous, but it does not mean the technology is somehow alive or aware of its surroundings. It is just generating text using patterns that it found on the Internet. In many cases, it mixes and matches patterns in surprising and disturbing ways. But it is not aware of what it is doing. It cannot reason like humans can.

Can’t companies stop the chatbots from acting strange?

They are trying.

With ChatGPT, OpenAI tried controlling the technology’s behavior. As a small group of people privately tested the system, OpenAI asked them to rate its responses. Were they useful? Were they truthful?

Then OpenAI used these ratings to hone the system and more carefully define what it would and would not do.
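In heavily simplified form, the idea looks like the Python sketch below: gather human ratings, keep the answers people judged useful and truthful, and train on those. The actual technique, known as reinforcement learning from human feedback, is considerably more involved, and the data here is invented.

# Invented example of rated responses from human testers.
rated_responses = [
    {"prompt": "Is the Earth flat?", "answer": "No, it is roughly a sphere.", "rating": 5},
    {"prompt": "Is the Earth flat?", "answer": "Yes, and it is being covered up.", "rating": 1},
]

# Keep only the answers reviewers judged useful and truthful...
good_examples = [r for r in rated_responses if r["rating"] >= 4]

# ...and feed them back into further training (fine-tuning), so the
# model learns to favor that kind of answer.
print(good_examples)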

But such techniques are not perfect. Scientists today do not know how to build systems that are completely truthful. They can limit the inaccuracies and the weirdness, but they can’t stop them. One of the ways to rein in the odd behaviors is keeping the chats short.
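Here is a minimal sketch in Python of that idea: cap how many recent exchanges get sent back to the model on each turn. The fake_model function is a made-up stand-in for a real chatbot, not any company’s actual interface.

MAX_TURNS = 5
history = []

def fake_model(messages):
    # Stand-in for a real chatbot call; it just reports how much context it saw.
    return f"(reply based on the last {len(messages)} messages)"

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    # Send only the most recent exchanges; long, meandering chats are
    # where the odd behavior tends to show up.
    recent = history[-(2 * MAX_TURNS):]
    reply = fake_model(recent)
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Hello"))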

But chatbots will still spew things that are not true. And as other companies begin deploying these kinds of bots, not everyone will be good about controlling what they can and cannot do.

The bottom line: Don’t believe everything a chatbot tells you.
