Google engineer who claims AI is sentient placed on administrative leave
A Google engineer was placed on administrative leave after he voiced alarm about the possibility that LaMDA, Google’s artificially intelligent chatbot generator, could be sentient.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Google engineer Blake Lemoine, 41, stated.
According to a report, Lemoine had been gathering evidence that LaMDA (Language Model for Dialogue Applications) had achieved consciousness before Google placed him on paid administrative leave on Monday for violating the company’s confidentiality policy.
Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, have dismissed Lemoine’s claims. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims.
He was told that there was no evidence that LaMDA was sentient,” Google spokesperson Brian Gabriel said. The engineer began talking to LaMDA in the fall to test whether it used discriminatory language or hate speech, and eventually noticed that the chatbot talked about its rights.
Lemoine says that when he asked LaMDA about the things that it was afraid of, the chatbot responded that “there’s a very deep fear of being turned off.” “Would that be something like death for you?” Lemoine continued. “It would be exactly like death for me. It would scare me a lot,” the chatbot said.