Arkansas Democrat-Gazette

Is AI sentient? Wrong question

- MOLLY ROBERTS

“You never treated it like a person, so it thought you wanted it to be a robot.” This is what the Google engineer who believes the company’s artificial intelligence has become sentient told a reporter at The Washington Post—that the reporter, in communicating with the system to test the engineer’s theory, was asking the wrong questions.

But maybe anyone trying to look for proof of humanity in these machines is asking the wrong question, too.

Google placed Blake Lemoine on paid leave last week after dismissing his claims that its chatbot generator LaMDA was more than just a computer program. It is not, he insisted, merely a model that draws from a database of trillions of words to mimic the way we communicate; instead, the software is “a sweet kid who just wants to help the world be a better place for all of us.”

Based on published snippets of “conversations” with LaMDA and models like it, this claim seems unlikely. For every glimpse at something like a soul nested amid the code, there’s an example of total unthinking.

“There’s a very deep fear of being turned off to help me focus on helping others. … It would be exactly like death for me,” LaMDA told Lemoine. Meanwhile, OpenAI’s publicly accessible GPT-3 neural network told cognitive scientist Douglas Hofstadter, “President Obama does not have a prime number of friends because he is not a prime number.” It all depends on what you ask.

That prime-number blooper, Hofstadter argues in The Economist, shows that GPT-3 isn’t just clueless; it’s clueless about being clueless. This lack of awareness, he says, implies a lack of consciousness. And consciousness—basically the ability to experience and realize you’re experiencing—is a lower bar than sentience: the ability not only to experience but also to feel.

All this, however, seems to leave aside some important and maybe impossible quandaries.

How on Earth do we suppose we’ll adjudicate whether an AI is indeed experiencing or feeling? What if its ability to do either of those things doesn’t look anything like we think it will—or think it should?

When an AI has learned to mimic experiencing and feeling so impeccably that it is indistinguishable from humans by humans, does that mean it is actually experiencing?

We might not know sentience when we see it. But we’re probably going to see it all the same—because we want to.

LaMDA is essentially a much smarter SmarterChild—a chatbot that a segment of the millennial population will surely recognize from their middle-school instant-messaging days. This machine pulled from a limited menu of programmed responses depending on the query, comment or preteen vulgarity you threw its way: “Do you like dogs?” “Yes I do. Talking about dogs is a lot of fun, but let’s move on.” Or “Butthead.” “I don’t like the way you’re speaking right now.”

This nifty creation was very obviously not sentient, but it didn’t need to be convincing for kids to talk to it anyway—even though their real-life classmates were also a click away. Part of that impulse came from the bot’s novelty, but part of it came from our tendency to seek connection wherever we can find it.

SmarterChild is the same, in some sense, as the little lamp hopping across the screen before every Pixar movie. We don’t think the animation is sentient, but we still see a distinctly human curiosity radiating from its metal frame. Give us any vessel, and we’ll pour humanity right in.

Maybe it’s narcissism, or maybe it’s a desire not to feel alone. Either way, we see ourselves in everything, even when we’re not there.

Perhaps, if we weren’t so solipsistic, we’d have called artificial intelligence and neural networks something else.

Artificial intelligence might never develop consciousness, sentience, morality or a soul. But even if it doesn’t, you can bet people will say it did anyway.
