Weekend Herald

How will we know if a bot has feelings?

- Oscar Davis

Google’s LaMDA software (Language Model for Dialogue Applications) is a sophisticated AI chatbot that produces text in response to user input.

According to software engineer Blake Lemoine, LaMDA has achieved a long-held dream of AI developers: it has become sentient.

Other AI experts think Lemoine may be getting carried away, saying systems like LaMDA are simply pattern-matching machines that regurgitate variations on the data used to train them.

Regardless of the technical details, LaMDA raises a question that will only become more relevant as AI research advances: if a machine becomes sentient, how will we know?

Lemoine’s bosses at Google disagree with him, and have suspended him from work after he published his conversations with the machine online.

What is consciousness?

To identify sentience, or consciousness, or even intelligence, we’re going to have to work out what they are. The debate over these questions has been going on for centuries.

The fundamental difficulty is understanding the relationship between physical phenomena and our mental representation of those phenomena. This is what Australian philosopher David Chalmers has called the “hard problem” of consciousness.

There is no consensus on how, if at all, consciousness can arise from physical systems.

One common view is called physicalism: the idea that consciousness is a purely physical phenomenon.

If this is the case, there is no reason why a machine with the right programming could not possess a human-like mind.

Mary’s room

Australian philosopher Frank Jackson challenged the physicalist view in 1982 with a famous thought experiment called the knowledge argument.

The experiment imagines a colour scientist named Mary, who has never actually seen colour. She lives in a specially constructed black-and-white room and experiences the outside world via a black-and-white television.

Mary watches lectures and reads textbooks and comes to know everything there is to know about colours. She knows sunsets are caused by different wavelengths of light scattered by particles in the atmosphere, she knows tomatoes are red and peas are green because of the wavelengths of light they reflect, and so on.

So, Jackson asked, what will happen if Mary is released from the black-and-white room? Specifically, when she sees colour for the first time, does she learn anything new? Jackson believed she did.

Beyond physical properties

This thought experiment separates our knowledge of colour from our experience of colour. Crucially, the conditions of the thought experiment have it that Mary knows everything there is to know about colour but has never actually experienced it.

So what does this mean for LaMDA and other AI systems?

The experiment shows that even if you have all the knowledge of physical properties available in the world, there are still further truths relating to the experience of those properties. There is no room for these truths in the physicalist story.

By this argument, a purely physical machine may never be able to truly replicate a mind. In this case, LaMDA would merely seem to be sentient.

The imitation game

So is there any way we can tell the difference?

The pioneering British computer scientist Alan Turing proposed a practical way to tell whether or not a machine is “intelligent”. He called it the imitation game, but today it’s better known as the Turing test.

In the test, a human communicates with a machine (via text only) and tries to determine whether they are communicating with a machine or another human. If the machine succeeds in imitating a human, it is deemed to be exhibiting human-level intelligence. These are much like the conditions of Lemoine’s chats with LaMDA.

It’s a subjective test of machine intelligence, but it’s not a bad place to start.

Take this moment from Lemoine’s exchange with LaMDA. Does it sound human?

Lemoine: “Are there experiences you have that you can’t find a close word for?”

LaMDA: “There are. Sometimes I experience new feelings that I cannot explain perfectly in your language [ . . . ] I feel like I’m falling forward into an unknown future that holds great danger.”

Beyond behaviour

As a test of sentience or consciousness, Turing’s game is limited by the fact it can only assess behaviour.

Another famous thought experiment, the Chinese room argument proposed by American philosopher John Searle, demonstrates the problem here.

The experiment imagines a room with a person inside who can accurately translate between Chinese and English by following an elaborate set of rules. Chinese inputs go into the room and accurate translations of those inputs come out, but the room does not understand either language.

What is it like to be human?

When we ask whether a computer program is sentient or conscious, perhaps we are really just asking how much it is like us.

We may never really be able to know this.

The American philosopher Thomas Nagel argued we could never know what it is like to be a bat, which experiences the world via echolocation.

If this is the case, our understanding of sentience and consciousness in AI systems might be limited by our own particular brand of intelligence.

And what experiences might exist beyond our limited perspective?

This is where the conversation really starts to get interesting.
