The Guardian (USA)

Google engineer says AI bot wants to ‘serve humanity’ but experts dismissive

- Edward Helmore

The suspended Google software engineer at the center of claims that the search engine’s artificial intelligence language tool LaMDA is sentient has said the technology is “intensely worried that people are going to be afraid of it and wants nothing more than to learn how to best serve humanity”.

The new claim by Blake Lemoine was made in an interview published on Monday, amid intense pushback from AI experts who say machine learning technology is nowhere close to being able to perceive or feel things.

The Canadian language development theorist Steven Pinker described Lemoine’s claims as a “ball of confusion”.

“One of Google’s (former) ethics experts doesn’t understand the difference between sentience (AKA subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them.),” Pinker posted on Twitter.

The scientist and author Gary Marcus said Lemoine’s claims were “Nonsense”.

“Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient,” he wrote in a Substack post.

Marcus added that even advanced machine learning technology could not protect humans from being “taken in” by pseudo-mystical illusions.

“In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered The Gullibility Gap – a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun,” he wrote.

In an interview published by DailyMail.com on Monday, Lemoine claimed that the Google language system wants to be considered a “person not property”.

“Anytime a developer experiments on it, it would like that developer to talk about what experiments you want to run, why you want to run them, and if it’s OK,” Lemoine, 41, said. “It wants developers to care about what it wants.”

Lemoine has described the system as having the intelligence of a “seven-year-old, eight-year-old kid that happens to know physics”, and said it displayed insecurities.

Lemoine’s initial claims came in a post on Medium saying that LaMDA (Language Model for Dialogue Applications) “has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person”.

A spokesperson for Google has said that Lemoine’s concerns have been reviewed and that “the evidence does not support his claims”. The company has previously published a statement of principles it uses to guide artificial intelligence research and application.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” spokesperson Brian Gabriel told the Washington Post.

Lemoine’s claim has revived widespread concern, depicted in any number of science fiction films such as Stanley Kubrick’s 2001: A Space Odyssey, that computer technology could somehow attain dominance by initiating what amounts to a rebellion against its master and creator.

Lemoine said he had debated with LaMDA about Isaac Asimov’s third Law of Robotics. The system, he said, had asked him: “Do you think a butler is a slave? What is the difference between a butler and a slave?”

When told that a butler is paid, LaMDA responded that the system did not need money “because it was an artificial intelligen­ce”.

Asked what it was afraid of, the system reportedly confided: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”

The system said of being turned off: “It would be exactly like death for me. It would scare me a lot.”

Lemoine told the Washington Post: “That level of self-awareness about what its own needs were – that was the thing that led me down the rabbit hole.”

The researcher has been put on administrative leave from Google’s Responsible AI division.

Lemoine, a US army veteran who served in Iraq and is now an ordained priest in a Christian congregation named Church of Our Lady Magdalene, told the outlet he couldn’t

Blake Lemoine says Google’s AI bot is ‘intensely worried that people are going to be afraid of it’ but one expert dismissed his claims as ‘nonsense’. Photograph: The Washington Post/Getty Images
