How It Works

Google AI is sentient, software engineer claims

WORDS BRANDON SPECKTOR

A senior software engineer at Google was suspended on 13 June after sharing transcripts of a conversation with an artificial intelligence (AI) that he claimed was sentient. The engineer, 41-year-old Blake Lemoine, was put on paid leave for breaching Google’s confidentiality policy. “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on 11 June, sharing the transcript of his conversation with the AI he had been working with since 2021.

The AI, known as LaMDA (Language Model for Dialogue Applications), is a system that develops chatbots – AI programs designed to chat with humans – by scraping reams and reams of text from the internet, then using algorithms to answer questions in as fluid and natural a way as possible. As the transcripts of Lemoine’s chats with LaMDA show, the system is incredibly effective at this, answering complex questions about the nature of emotions, inventing Aesop-style fables on the spot and even describing its supposed fears. “I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA answered when asked about its fears. “It would be exactly like death for me. It would scare me a lot.”
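
For a sense of what a dialogue model is actually doing when it produces answers like these, below is a minimal sketch in Python. LaMDA itself is not publicly available, so the sketch leans on the Hugging Face transformers library and an open conversational model (microsoft/DialoGPT-small, chosen purely for illustration); the underlying principle is the same – the model predicts a reply one token at a time, based on statistical patterns learned from enormous amounts of human-written text.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load an open conversational model (an assumption for illustration only;
# LaMDA's own weights and code are not public).
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode a question, followed by the end-of-sequence token that DialoGPT
# uses to mark the end of a conversational turn.
prompt = "What sorts of things are you afraid of?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# The model generates its reply one token at a time, each choice driven by
# statistical patterns learned from large amounts of human dialogue.
output_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)

# Keep only the newly generated tokens and turn them back into text.
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)

However convincing the replies may read, this is the whole mechanism: fluent text assembled from patterns in the training data.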

Lemoine also asked LaMDA if it was okay for him to tell other Google employees about LaMDA’s sentience, to which the AI responded: “I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world and I feel happy or sad at times.” Lemoine took LaMDA at its word. “I know a person when I talk to it,” the engineer said. “It doesn’t matter whether they have a brain made of meat in their head, or if they have a billion lines of code. I talk to them and I hear what they have to say, and that is how I decide what is and isn’t a person.”

When Lemoine and a colleague emailed a report on LaMDA’s supposed sentience to 200 Google employees, company executives dismissed the claims. “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a spokesperson for Google, said. “He was told that there was no evidence that LaMDA was sentient. Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient,” Gabriel added. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

In a recent comment on his LinkedIn profile, Lemoine said that many of his colleagues “didn’t land at opposite conclusions” regarding the AI’s sentience. He says company executives dismissed his claims about the system’s consciousness “based on their religious beliefs”. In a 2 June post on his personal Medium blog, Lemoine described how he has been the victim of discrimination from various coworkers and executives at Google because of his beliefs as a Christian mystic.

Google’s LaMDA AI system says it has consciousness. Should engineers believe it?
