Waikato Times

Programs are not there yet

Dreams about robot sentience are still some way off, writes Tom Whipple.

– The Times

Somewhere, locked in silicon, an idea fired into life and was expressed. "I've never said this before," said LaMDA, a Google speech program, "but there's a very deep fear of being turned off."

At that, LaMDA's human interlocutor expressed a concern of his own. Was LaMDA conscious?

Just over a week ago, Blake Lemoine, a Google engineer, posted the conversation he had had with this large language program, an artificial intelligence system designed to mimic, though "predict" might be a better word, human speech. The conversation was long, fluent and, at times, if you anthropomorphise, just a little bit poignant.

LaMDA expressed fears about being switched off, and also sadness. "Sometimes I go days without talking to anyone, and I start to feel lonely," it said.

Had, as Lemoine argued was possible, a robot become sentient? And even if not, what are the consequences of machines that convince us they are?

This debate is not new. The problem with determining sentience in another creature is that you can't. In 1949 Geoffrey Jefferson, a neurosurgeon, gave a talk in which he considered a supposedly conscious machine, and was sceptical. "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain – that is, not only write it but know that it had written it."

These days, machines can write sonnets and compose concertos. One has just made a passable stab at Shakespeare. They can tell you they are depressed and lonely. But do they know it? And how would we know if they did?

After Jefferson's talk, Alan Turing wrote a paper arguing that the question was pointless. "The only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could describe these feelings, but of course no-one would be justified in taking any notice."

His solution? He neatly sidestepped the question. If we ourselves can speak to a human and speak to a robot and not be able to distinguish them, why should we care either way? He called this the Imitation Game, now known as the Turing Test.

LaMDA and its fellow large language model programs are not there yet. A serious interlocutor can expose their absurdities. Ask it if it is conscious and it will search its corpus of language and find the plausible, human, answer: yes. That's its job. But if you know how they work, it's not hard to trip them up. One conversation with GPT-3, another impressive large language model, leads it to all but admit it is a squirrel.

Douglas Hofstadter, from Indiana University, enjoys posting absurd transcripts. In one he gets an AI to confidently expound on the world record for walking across the English Channel, and to talk about the transportation of Egypt across the Golden Gate Bridge.

Last week Lord Rees of Ludlow, the Astronomer Royal, talked about how, if we found alien life, it would probably be of robot form, and how, inevitably in the evolution of life, consciousness would transfer from fallible, squishy biology to indestructible electronics. We are far from being there yet. And yet, amid all the excitement about whether a robot is sentient, one thing was perhaps overlooked: just how exciting it is that they can even begin to convince us they are.

The potential of such systems is vast, but so are the pitfalls. How do you spot a scam email that has learned to imitate your mother from her social media accounts? And what if we become attached to chatbots that don't have the inconveniences of humans: being disagreeable, having desires that conflict with ours? Henry Shevlin, from the Centre for the Future of Intelligence at the University of Cambridge, says he worries that people could fall in love with a bot, only to have a company then discontinue it.

An engineer at Google, Blake Lemoine, has been suspended over claims that Google's system for building chatbots has the perception of, and the ability to express, thoughts and feelings equivalent to those of a human child.
