Los Angeles Times (Sunday)

I felt better about ChatGPT after spending quality time with it

I asked the most sophisticated chatbot ever released to write me a symphony and compose a letter to my ex

- ROBIN ABCARIAN @AbcarianLAT

I mean, what was I expecting from a chatbot? A formula for world peace? Clues on how to mend a broken heart? A cheesy joke? Sure, all that, why not? I wasn’t expecting it, however, to blow me off, to tell me it was too busy for me. And that it would get in touch later by email, when it was free.

But that’s how it goes with ChatGPT, the amazingly lifelike program that rolled out in November and has promptly been deluged with curious users — more than a million, according to its San Francisco-based creator, OpenAI. It has been called “quite simply, the best artificial intelligence chatbot ever released to the general public.” No wonder it’s been crashing from overuse.

With most technologies, I am hardly an early adopter. I have absolutely no urge to use the first iteration of anything. But so many AI stories have swirled around the media sphere, including how AI is going to replace journalists, that it seemed irresponsible not to plunge in.

After all, panic seems to be one of the most predictable human responses to any important technological advance.

The Atlantic predicted that in the next five years, AI will reduce employment opportunities for college-educated workers. (Actually, ChatGPT predicted that outcome after The Atlantic prompted it to address the issue.)

The New York Times recently had a story about how chatbots like ChatGPT are writing entire papers for undergrads, forcing universities to change how they assign work. So far, The Times reported, more than 6,000 teachers from institutions including Harvard, Yale and the University of Rhode Island have signed up to use GPTZero, a program developed by a Princeton University senior to detect artificial-intelligence-generated text.

On the less gloomy front, NPR aired a story about a woman who uses a chatbot app as her therapist when she’s feeling depressed. “It’s not a person, but it makes you feel like it’s a person,” she told NPR, “because it’s asking you all the right questions.”

A day later, my friend Drex forwarded a video about the latest evolution of Atlas, the Boston Dynamics humanoid robot that has captivated viewers with its uncanny dance and parkour moves. Atlas can now run, jump, grab and throw. The new video shows Atlas handing a worker on a scaffold the tool bag he left on the ground.

“So this is how it will end for us humans,” Drex lamented. Nah. I happen to believe less in the robots-will-kill-us theory of the apocalypse and more in the humans-will-blow-ourselves-up theory, so I am not unduly worried about bots that can write term papers, bring us our tool bags or dance.

But AI can certainly run amok. (See: Tesla autonomous car crashes.)

CNET, the popular tech website, had to amend dozens of its news stories after admitting it was using bots to write them. The bots were error prone, miscalculating basic things like compound interest. Futurism, the website that discovered the ruse, was less charitable: “CNET is now letting an AI write articles for its site. The problem? It’s kind of a moron.” CNET claimed the bots were an experiment.

Anyway, when ChatGPT was not too busy to talk to me, we were able to spend some quality time together. I asked serious questions based on some of my recent columns, such as “Are religious beliefs more important than academic freedom?” “Has Prince Harry been disloyal to his family?” “Will Ukraine win the war?” ChatGPT’s answers varied from wishy-washy to sensitive:

“In some cases, religious beliefs may be considered more important than academic freedom, while in other cases, the opposite may be true.”

“Whether or not someone considers Prince Harry to have been disloyal is a matter of personal perspective and opinions.”

“It is not appropriate to predict the outcome of a war, as it is not only difficult to predict but also disrespectful to the people who are affected by it.”

ChatGPT, the latter part of which stands for generative pretrained transformer, was straightforward about its limitations. It could tell me what a symphony is, but it could not compose one. It was also a little oversteppy. When I asked it to compose a letter to someone who broke my heart, it did, but it warned: “It’s also important to consider the person who broke your heart’s feelings and whether contacting them is the best course of action for you.” Who asked you?

Less serious questions got decent, if boilerplate, answers: A good plot for a novel, ChatGPT suggested, would be about a young woman who inherits a mansion and discovers a secret room with the journal of a young woman who lived in the house a century earlier and was embroiled in a forbidden love affair. The protagonist becomes obsessed with the journal and the secrets it reveals about her own family. “Along the way, she must face her own demons and confront the truth about herself,” ChatGPT advised.

Unlike Google, which is apparently getting very nervous about this new competitor, ChatGPT remembers your conversations, so when I asked if the plot it had suggested was taken from a real novel, it knew what I was talking about and said it was not.

I also indulged in nonsense. “How much does Czechoslovakia weigh?” I wondered. (“As it is a former country and not a physical object, it does not have a weight.”)

“To be or not to be?” (Hamlet, said ChatGPT, “is weighing the pros and cons of life, and considering whether it would be better to end his life or continue living and dealing with his troubles.”)

And — how could I not? — I asked if it knew any dirty jokes.

“Some types of jokes, including dirty jokes, can be considered offensive or disrespectful to certain individuals or groups and it’s important to be mindful of that before sharing any type of joke.” How uptight.

It did, however, offer a bunch of Dad jokes: “Why was the math book sad? Because it had so many problems.” “Why was the computer cold? Because it left all its windows open.”

My final request to ChatGPT was to see if it could edit the opening lines of three recent columns to make them better.

I am happy to report that in my entirely subjective, all-too-human opinion, it made no edits that improved my copy and, in fact, made it clunkier.

You ain’t putting me out of a job yet, robot.

