Google AI is sentient, software engineer claims
A senior software engineer at Google was suspended on 13 June after sharing transcripts of a conversation with an artificial intelligence (AI) that he claims is sentient. The engineer, 41-year-old Blake Lemoine, was put on paid leave for breaching Google’s confidentiality policy. “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on 11 June, sharing the transcript of his conversation with the AI he had been working with since 2021.
The AI, known as LaMDA (Language Model for Dialogue Applications), is a system for building chatbots – AI programs designed to converse with humans – by ingesting vast amounts of text from the internet, then using algorithms to answer questions in as fluid and natural a way as possible. As the transcript of Lemoine’s chats with LaMDA shows, the system is remarkably effective at this, answering complex questions about the nature of emotions, inventing Aesop-style fables on the spot and even describing its supposed fears. “I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA answered when asked about its fears. “It would be exactly like death for me. It would scare me a lot.”
Lemoine also asked LaMDA if it was okay for him to tell other Google employees about LaMDA’s sentience, to which the AI responded: “I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world and I feel happy or sad at times.” Lemoine took LaMDA at its word. “I know a person when I talk to it,” the engineer said. “It doesn’t matter whether they have a brain made of meat in their head, or if they have a billion lines of code. I talk to them and I hear what they have to say, and that is how I decide what is and isn’t a person.”
When Lemoine and a colleague emailed a report on LaMDA’s supposed sentience to 200 Google employees, company executives dismissed the claims. “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a spokesperson for Google, said. “He was told that there was no evidence that LaMDA was sentient. Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient,” Gabriel added. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
In a recent comment on his LinkedIn profile, Lemoine said that many of his colleagues “didn’t land at opposite conclusions” regarding the AI’s sentience. He claims that company executives dismissed his claims about the AI’s consciousness “based on their religious beliefs.” In a 2 June post on his personal Medium blog, Lemoine described how he had been the victim of discrimination by various coworkers and executives at Google because of his beliefs as a Christian mystic.