Boston Sunday Globe

How to use ChatGPT to apologize

The trick is to use it to augment your own capabilities and not go overboard by outsourcing essential opportunities to learn, develop, and practice being a good human.

- By Evan Selinger and Brett Frischmann

ChatGPT is constantly apologizing. That’s because it gets many things wrong, misunderstands requests, doesn’t always do what we want, and sometimes offers incomplete information. Of course, a computer program can’t feel embarrassed (or anything), and ChatGPT is only programmed to simulate a polite desire to please. Faking care is how machines gain our trust.

Given that humans can sincerely care about one another, should we ever use ChatGPT to apologize? Consider the controversy surrounding Memphis Grizzlies star Ja Morant. After two videos appeared with him flashing a gun, Morant issued an apology that seemed as if it were written by ChatGPT. Critics found it insincere.

If Morant did, in fact, use ChatGPT, he didn’t do anything wrong. It’s naive to expect him to offer an authentic apology. When stars publicly say mea culpa to an impersonal group (“I know I’ve disappointed a lot of people”) after falling short of being a role model, they’re only dealing with one thing: optics. Whether they turn to a publicist or an AI-powered app doesn’t matter. Ghostwriting is a standard PR strategy, and celebrities aren’t ethically obligated to be more genuine than ChatGPT.

People have wrongly demonized ChatGPT users many other times. Earlier this year, in response to a shooting at Michigan State University, the Office of Equity, Diversity, and Inclusion at Vanderbilt University’s Peabody College of Education and Human Development sent out an e-mail about the importance of inclusivity on campuses. The note included the following footnote: “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.” Students rejected the bot-coached writing as too robotic. After receiving blowback, an associate dean expressed regret, saying that getting algorithmic help was “poor judgment” that “contradicted the values” of the college.

The administrators shouldn’t have had to apologize, because turning to ChatGPT in this context wasn’t cheating at caring. The school administrators were addressing a broad group (“Dear Peabody Family”) with a professional public service announcement (“come together as a community”). Yes, sensitive topics were involved. But this wasn’t an intimate interpersonal communication. It would have been equally appropriate to run the prose past legal counsel and a PR firm.

On the other hand, we have deep reservations about supposedly “smart” technology making people robotic. Now that machines can pass the Turing Test by holding conversations, it would be a shame for humans to fail what, in “Re-Engineering Humanity,” we call a Reverse Turing Test by behaving indistinguishably from machines. So even if it’s OK to crib from a bot in contexts like performative celebrity brand management, it’s bad form to be thoughtlessly predictable in interpersonal communications with people with whom we have genuine relationships.

That said, there are good ways to use the technology to help you craft meaningful apologies.

Apologizing requires more than just saying sorry. In “I Was Wrong: The Meaning of Apologies,” University of New Hampshire philosophy professor Nick Smith argues that a meaningful apology has many elements. For example, you have to have the right intention — to genuinely care about the person you’ve harmed. You also need to be clear about how you hurt the person, which includes acknowledging your mistakes and why they matter. And when you’re conveying this information, you need to respect the dignity of the person you’re apologizing to and display restorative behavior, like suggesting ways to make up for the damage you caused.

Smith’s criteria don’t apply to every possible apology, but they reveal the pitfalls of using tools like ChatGPT to generate one-click apologies. Chances are they would short-circuit the deliberative engagement a real apology requires.

However, consider a situation in which you’re unsure why someone wants an apology and suspect that asking them about it will only worsen a bad situation. We entered just such a scenario into ChatGPT: “A friend is upset and wants me to apologize for not staying in regular contact. I don’t understand why she can’t accept that I’m busy.”

The program began its response with “It sounds like your friend is feeling neglected or unimportant because you haven’t been keeping in touch as regularly as she would like. People have different expectations and needs in friendships.” It went on to suggest a three-step approach — “Acknowledge her feelings,” “explain your perspective,” and “suggest a compromise” — with sample text to go with each of the steps. “In any event,” the response concluded, “it’s important to be understanding and not dismiss her feelings. She’s sharing this with you because your friendship matters to her, so try to see this as a positive thing.”

As a preliminary step, this could be helpful. ChatGPT offered reasonable basic insights. You should always be on guard against generative AI BS-ing you with sensible-sounding nonsense, but in this case, ChatGPT’s output is a bit like a “Dear Abby”-style newspaper column, a wiki on how to apologize, or a polished version of what one might find in countless threads on Reddit and other social media platforms. It also resembles the commonsensical advice one might get from wise elders, family, or friends. It’s not groundbreaking, but because ChatGPT expands the corpus of social knowledge from which one might draw, it can be a critically important resource for many people. After all, we’re living in an age when people find apologizing so difficult that they sometimes prefer “ghosting” — disappearing from your life without offering any explanation.

The trick is to use ChatGPT to augment your own capabilities and not go overboard by outsourcing essential opportunities to learn, develop, and practice being a good human. One way to do this is to actively and deliberately compare ChatGPT’s output with guidance from other sources.

Querying ChatGPT, comparing it with other sources, and doing some reflection may seem like a lot of work. That’s as it should be. A parroted apology isn’t a real apology; it’s a cheap shortcut.

And perhaps future versions of generative AI programs can be designed with this in mind. Someday they could lessen your burdens and help you be more reflective. They could cite trustworthy sources and ask you to consider important questions — rather than just serving as prompt-answering machines.

Evan Selinger is a professor of philosophy at the Rochester Institute of Technology and an affiliate scholar at Northeastern University’s Center for Law, Innovation, and Creativity. Brett Frischmann is a professor of law, business, and economics at Villanova University’s Charles Widger School of Law.
