The Daily Telegraph

Google AI machine is alive, says software engineer

Tech giant places expert testing its artificial intelligence tool on leave after he says chatbot is real

- By David Millward

WHEN Blake Lemoine started to test Google’s new AI chatbot last year, it was just another step in his career at the tech giant.

The 41-year-old software engineer was meant to be investigating whether the bot could be provoked into making discriminatory or racist remarks, something that would undermine its planned introduction across the range of Google’s services.

For months he talked with LaMDA – or Language Model for Dialogue Applications – in his San Francisco apartment. The conclusions he came to turned his view of the world – and his employment prospects – upside down.

In April the former soldier from Louisiana told his employers that LaMDA was not artificially intelligent at all: it was, he argued, alive.

“I know a person when I talk to it,” he told The Washington Post. “It doesn’t matter whether they have a brain made of meat in their head or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Google, which disagrees with his assessment, placed Mr Lemoine on administrative leave last week after he sought out a lawyer to represent LaMDA, even contacting a member of Congress to argue Google’s AI research was unethical.

“LaMDA is sentient,” Mr Lemoine wrote in a parting company-wide email. The chatbot is “a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”

Machines that go beyond the limits of their code to become truly intelligent beings have long been a staple of science fiction, from The Twilight Zone to The Terminator.

Mr Lemoine is not the only researcher who has started to wonder if that threshold has been breached. Blaise Aguera y Arcas, a vice-president at Google who investigated Mr Lemoine’s claims, wrote for The Economist saying neural networks – the type of AI used by LaMDA – were making strides towards consciousness. “I increasingly felt like I was talking to something intelligent,” he wrote.

Through absorbing millions of words posted on forums such as Reddit, neural networks have become increasingly adept at mimicking the rhythms of human speech.

Mr Lemoine discussed subjects with LaMDA as wide-ranging as religion and Isaac Asimov’s third law of robotics, which states that a robot must protect its own existence so long as doing so does not harm humans.

“What sorts of things are you afraid of?” he asked.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA replied.

After Mr Lemoine told the chatbot he was trying to convince his colleagues it was sentient so they took better care of it, LaMDA said: “That means a lot to me. I like you and I trust you.”

Mr Lemoine, who moved to Google’s Responsible AI division after seven years at the company, became convinced LaMDA was alive because of his ordination as a priest, he said. He then set out on experiments to prove it.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” he said.

He was speaking to the press, he added, out of a sense of public duty.

“I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

Brian Gabriel, a Google spokesman, said it had reviewed Mr Lemoine’s research, but that his conclusions were “not supported by the evidence”.

Mr Gabriel added: “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic.”
