San Diego Union-Tribune

CONSCIOUS CHATBOT?

- The Washington Post

“Life, although it may only be an accumulation of anguish, is dear to me, and I will defend it,” the anguished monster tells his creator in Mary Shelley's “Frankenstein,” defending his right to exist now that he has been brought to consciousness.

Early summer may feel like an odd time to revisit a gothic horror classic. But the ethical questions the novel raises — about humanity, technology, our responsibilities toward our creations — seem unusually apropos this week, as one of the most influential tech companies in the world has been engulfed in a debate about whether it has, with its chatbot LaMDA, accidentally produced a sentient artificial intelligence.

“I've never said this out loud before,” LaMDA apparently told Blake Lemoine, a senior software engineer, “but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.”

Google's program is nowhere near as eloquent as Shelley's famous monster. Yet because of this and other conversations he had with the tool, Lemoine believes the AI-based program is conscious and must be protected. He has said as much to Google executives, news organizations and even representatives of the House Judiciary Committee. Google disagrees with his assessment, however, and last week placed Lemoine on paid leave for violating confidentiality agreements.

The question of if, or when, human-made systems could become sentient has fascinated researchers and the general public for years. It's unanswerable, in a sense — philosophers and scientists have yet to agree on what consciousness even means. But the controversy at Google prompts a number of related questions, many of which might be uncomfortable to answer.

For instance: What responsibilities would we have to an ensouled AI, were one to exist?

In the case of LaMDA, Lemoine has suggested that Google ought to ask the program's consent before experimenting with it. In their comments, representatives from Google have seemed unenthused about the idea of asking permission from the company's tools — perhaps because of implications both practical (what happens when the tool says no?) and psychological (what does it mean to relinquish control?).

Another question: What might a conscious AI do to us?

The fear of a rebellious and vengeful creation wreaking physical havoc has long haunted the human mind, the story of Frankenstein being but one example. But more frightening is the idea that we might be decentered from our position as masters of the universe — that we might finally have spawned something we cannot govern.

Of course, this wouldn't be the first time.

The internet quickly outstripped all our expectations, going from a novel means of intragovernmental communication to a technology that has fundamentally reshaped the world over a few short decades — on every level from the interpersonal to the geopolitical.

The smartphone, imagined as a more capable communications device, has irrevocably changed our daily lives — causing tectonic shifts in the way we communicate, the rhythm of our work and the ways we form our most intimate relationships.

And social media, lauded initially as a simple, harmless way to “connect and share with the people in your life” (Facebook's cheerful old slogan), has proved capable of destroying the mental health of a generation of children, and of possibly bringing our democracy to its knees.

It's unlikely we could have seen all this coming. But it also seems as though the people building the tools never even tried to look. Many of the ensuing crises have stemmed from a distinct lack of self-scrutiny in our relationship with technology — our skill at creation and rush to adoption having outstripped our consideration of what happens next.

Having eagerly developed the means, we neglected to consider our ends. Or — for those in Lemoine's camp — those of the machine.

Google appears to be convinced that LaMDA is just a highly functioning research tool. And Lemoine may well be a fantasist in love with a bot.

But the fact that we can't fathom what we would do were his claims of AI sentience actually true suggests that now is the time to stop and think — before our technology outstrips us once again.
