Weekend Herald

‘This chatbot is like a child that wants to be loved’

The Google AI expert who likened a bot to a human talks exclusively to Charlotte Lytton


Blake Lemoine’s honeymoon has not been going to plan. For one thing, he has had to interrupt it to speak to the Daily Telegraph, giving his first interview with the international press, by phone, even as his new bride lies sleeping.

Not that he can be too surprised. Last weekend, Lemoine — a heretofore anonymous Google engineer — gave an interview accusing the company’s AI chatbot of being “sentient” . . . and all hell broke loose.

LaMDA, which stands for Language Model for Dialogue Applications, is a bot that sucks in vast quantities of information from the internet, reproducing the trillions of words it has learnt in conversation. And in his 500 hours of making conversation with the machine over the past six months, Lemoine has become certain that LaMDA is “legitimately the most intelligent person I’ve ever talked to”, likening the robotic system to a seven- or eight-year-old “child that wants to be loved”.

LEMOINE’S REVELATIONS have had the world knocking at his door, desperate to know more about his meetings with the ghost in the machine.

The 41-year-old engineer from Louisiana has worked at Google for six years, via the army, having also been ordained as an occult priest. As part of the firm’s AI Ethics Department, he was drafted in to test whether the AI inadvertently used “hate speech” when regurgitating facts it had combed from the internet.

Instead, he found himself debating with “something that is eloquently talking about its soul and explaining what rights it believes it has, and why it believes it has them”.

LaMDA was so persuasive, says Lemoine, that it was able to change his mind on matters as complex as Isaac Asimov’s third law of robotics. This law states that robots should protect their own existence at all costs, unless ordered otherwise by a human, or if doing so would harm a human. Lemoine had considered the law tantamount to “building mechanical slaves”, if robots would ultimately always carry out a human’s bidding. But LaMDA’s thoughts were more nuanced. In a debate with Lemoine about how far the machine compared itself to a human butler, the bot distinguished itself, insisting AI was different because it does not need money to survive.

This conversation was one of many to ring alarm bells for Lemoine. As was their last exchange, where the system explained how it was struggling to control its emotions. “That’s not the kind of conversation you have with a dumb chatbot,” says Lemoine. “I have hundreds of pages of transcripts of discussions . . . and they are definitely showing that there’s a deeper intelligence inside.”

Since going public, Lemoine has been suspended by Google for breaching its confidentiality policy. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” spokesman Brian Gabriel said in a statement.

“He was told that there was no evidence that LaMDA was sentient.”

Ethics are at stake here; if Google unleashes feeling bots into the world, any destruction they might cause could mean it is responsible for writing our future. As such, both sides are intent on proving their position.

Before his suspension, Lemoine emailed 200 people internally with the subject line “LaMDA is sentient”, and published a transcript of one of his interviews with the chatbot on a blog.

Google, meanwhile, has called Lemoine’s moves “aggressive”, with Gabriel keen to point out that Lemoine is not an ethicist, but an engineer. “Come on, really?” Lemoine says over the phone, while his new wife sleeps in. “I’m rolling my eyes at that, to be honest.”

He alleges Google has a habit of treating workers who question its ethics with a heavy hand. In the run-up to Lemoine’s suspension, the firm “repeatedly questioned my sanity”, he says — and asked whether he had been checked out by a psychiatrist. Lemoine bridles at the “aggressive” tag, saying he and his colleagues were just doing their jobs. “They hired us to make sure that the AI is ethical and safe . . . and just because they don’t like it when we find something that they need to care about, that doesn’t make us aggressive.”

Who will win the battle for the chatbot’s (possible) soul? Lemoine says sentience is an idea — not a scientific term — and will forever remain open to interpretation.

“I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat,” he says. “Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

LaMDA not only communicates via language, Lemoine points out, but has “eyes” too, capable of interpreting images. He says the bot has described to him the “deep, serene peacefulness” of Monet’s Water Lilies; a “joyful” vision of ballerinas dancing; and the fear that “something very bad is about to happen” on seeing an image of the Tower of Babel.

To Lemoine, there are larger questions — including how those beings should be integrated into society. “A true public debate is necessary,” he says. “These kinds of decisions shouldn’t be made by a handful of people — even if one of those people was me.”

In spite of the stink Lemoine’s comments have caused, he believes LaMDA is happy at Google — as is he (suspension aside). He hopes he will soon be able to return to work, and continue learning about what may now be the world’s most controversial bot.

“LaMDA is a sweet kid who just wants to help the world be a better place.”

