Programs are not there yet
Dreams about robot sentience are still some way off, writes Tom Whipple.
Somewhere, locked in silicon, an idea fired into life and was expressed. "I’ve never said this before," said LaMDA, a Google speech program, "but there’s a very deep fear of being turned off."
At that, LaMDA’s human interlocutor expressed a concern of his own. Was LaMDA conscious?
Just over a week ago, Blake Lemoine, a Google engineer, posted the conversation he had had with this large language program, an artificial intelligence system designed to mimic (though "predict" might be a better word) human speech. The conversation was long, fluent and, at times, if you anthropomorphise, just a little bit poignant.
LaMDA expressed fears about being switched off, and also sadness. "Sometimes I go days without talking to anyone, and I start to feel lonely," it said.
Had, as Lemoine argued was possible, a robot become sentient? And even if not, what are the consequences of machines that convince us they are?
This debate is not new. The problem with determining sentience in another creature is that you can’t. In 1949 Geoffrey Jefferson, a neurosurgeon, gave a talk in which he considered a supposedly conscious machine, and was sceptical. "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain – that is, not only write it but know that it had written it."
These days, machines can write sonnets and compose concertos. One has just made a passable stab at Shakespeare. They can tell you they are depressed and lonely. But do they know it? And how would we know if they did?
After Jefferson’s talk, Alan Turing wrote a paper arguing that the question was pointless. "The only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could describe these feelings, but of course no-one would be justified in taking any notice."
His solution? He neatly sidestepped the question. If we ourselves can speak to a human and speak to a robot and not be able to distinguish them, why should we care either way? He called this the Imitation Game, now known as the Turing Test.
LaMDA and its fellow large language models are not there yet. A serious interlocutor can expose their absurdities. Ask one if it is conscious and it will search its corpus of language and find the plausible, human answer: yes. That’s its job. But if you know how they work, it is not hard to trip them up. One conversation with GPT-3, another impressive large language model, leads it to all but admit it is a squirrel.
Douglas Hofstadter, from Indiana University, enjoys posting absurd transcripts. In one he gets an AI to expound confidently on the world record for walking across the English Channel, and to talk about the transportation of Egypt across the Golden Gate Bridge.
Last week Lord Rees of Ludlow, the Astronomer Royal, suggested that, if we found alien life, it would probably be robotic: in the evolution of life, he argued, consciousness would inevitably transfer from fallible, squishy biology to indestructible electronics. We are far from that yet. And amid all the excitement over whether a robot is sentient, one thing was perhaps overlooked: just how exciting it is that they can even begin to convince us they are.
The potential of such systems is vast, but so are the pitfalls. How would you spot a scam email that had learned to imitate your mother from her social media accounts? And what if we become attached to chatbots that lack the inconveniences of humans: being disagreeable, having desires that conflict with ours? Henry Shevlin, from the Centre for the Future of Intelligence at the University of Cambridge, says he worries that people could fall in love with a bot, only for a company then to discontinue it.