One year in and ChatGPT already has us doing its bidding

- By Vauhini Vara © 2023 The New York Times. Vauhini Vara is a journalist and fiction writer. Her forthcoming essay collection, Searches, examines how technology is transforming human communication.

ONE OF THE FIRST THINGS I asked ChatGPT about, early this year, was myself: “What can you tell me about the writer Vauhini Vara?” It told me I’m a journalist (true, though I’m also a fiction writer), that I was born in California (false), and that I’d won a Gerald Loeb Award and a National Magazine Award (false, false).

After that, I got into the habit of inquiring about myself often. Once, it told me Vauhini Vara was the author of a nonfiction book called Kinsmen and Strangers: Making Peace in the Northern Territory of Australia. That, too, was false, but I went with it, responding that I had found the reporting to be “fraught and difficult.”

“Thank you for your important work,” ChatGPT said.

Trolling a product hyped as an almost-human conversationalist, tricking it into revealing its essential bleep-bloopiness, I felt like the heroine in some kind of extended girl-versus-robot power game.

Different forms of artificial intelligence have been in use for a long time, but ChatGPT’s unveiling toward the end of last year was what brought AI, quite suddenly, into our public consciousness. By February, ChatGPT was, by one metric, the fastest-growing consumer application in history. Our first encounters revealed these technologies as extremely eccentric — recall Kevin Roose’s creepy conversation with Microsoft’s AI-powered Bing chatbot, which, in the space of two hours, confided that it wanted to be human and was in love with him — and often, as in my experience, extremely wrong.

A lot has happened in AI since then: Companies went beyond the basic products of the past, introducing more sophisticated tools like personalized chatbots, services that can process photos and sound alongside text, and more. The rivalry between OpenAI and more established tech companies became more intense than ever, even as smaller players gained traction. Governments in China, Europe, and the United States took major steps toward regulating the technology’s development while trying not to cede competitive ground to other nations’ industries.

But what distinguished the year, more than any single technological, business, or political development, was the way AI insinuated itself into our daily lives, teaching us to regard its flaws — creepiness, errors and all — as our own while the companies behind it deftly used us to train up their creation. By May, when it came out that lawyers had submitted a legal brief that ChatGPT had filled with references to court decisions that didn’t exist, the joke, like the $5,000 fine the lawyers were ordered to pay, was on them, not the technology. “It’s embarrassing,” one of them told the judge.

Something similar happened with AI-produced deepfakes, digital impersonations of real people. Remember when they were regarded with terror? By March, when Chrissy Teigen couldn’t figure out whether an image of the pope in a Balenciaga-inspired puffer coat was real, she posted on social media, “i hate myself lol.” High schools and universities moved swiftly from worrying about how to prevent students from using AI to showing them how to use it effectively. AI still isn’t very good at writing, but now when it shows its shortcomings, it’s the students who use it poorly who get ridiculed, not the products.

Fine, you might be thinking, but haven’t we been adapting to new technologies for most of human history? If we’re going to use them, shouldn’t the onus be on us to be smart about it? This line of reasoning avoids what should be a central question: Should lying chatbots and deepfake engines be made available in the first place?

AI’s errors have an endearingly anthropomorphic name — hallucinations — but this year made clear just how high the stakes can be. We got headlines about AI instructing killer drones (with the possibility for unpredictable behavior), sending people to jail (even if they’re innocent), designing bridges (with potentially spotty oversight), diagnosing all kinds of health conditions (sometimes incorrectly), and producing convincing-sounding news reports (in some cases, to spread political disinformation).

As a society, we’ve clearly benefited from promising AI-based technologies; this year I was thrilled to read about the ones that might detect breast cancer that doctors miss or let humans decipher whale communications. Focusing on those benefits, however, while blaming ourselves for the many ways that AI technologies fail us, absolves the companies behind those technologies — and, more specifically, the people behind those companies.

Events of the past several weeks highlight how entrenched those people’s power is. OpenAI, the entity behind ChatGPT, was created as a nonprofit so that it could maximize the public interest rather than just profit. When, however, its board fired Sam Altman, the chief executive, amid concerns that he was not taking that public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the inconvenient board members replaced.

It occurs to me in retrospect that in my early games with ChatGPT, I misidentified my rival. I thought it was the technology itself. What I should have remembered is that technologies themselves are value neutral. The wealthy and powerful humans behind them — and the institutions created by those humans — are not.

The truth is that no matter what I asked ChatGPT, in my early attempts to confound it, OpenAI came out ahead. Engineers had designed it to learn from its encounters with users. And regardless of whether its answers were good, they drew me back to engage with it again and again. A major goal of OpenAI’s, in this first year, has been to get people to use it. In pursuing my power games, then, I’ve done nothing but help it along.

AI companies are working hard to fix their products’ flaws. With all the investment the companies are attracting, one imagines that some progress will be made. But even in a hypothetical world in which AI’s capabilities are perfected — maybe especially in that world — the power imbalance between AI’s creators and its users should make us wary of its insidious reach. ChatGPT’s seeming eagerness not just to introduce itself, to tell us what it is, but also to tell us who we are and what to think is a case in point. Today, when the technology is in its infancy, that power seems novel, even funny. Tomorrow it might not.

Recently, I asked ChatGPT what I — that is, the journalist Vauhini Vara — think of AI. It demurred, saying it didn’t have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who is writing an opinion piece for The New York Times about AI. “As the rain continued to tap against the windows,” it wrote, “Vauhini Vara’s words echoed the sentiment that, much like a symphony, the integration of AI into our lives could be a beautiful and collaborative composition if conducted with care.”
