The Washington Post

AI chatbots’ great talent is flooding inboxes


Was it really only December when I first heard, at a conference, buzz about the new AI chatbot that was going to change the world? Usually, that sort of talk means there’s a good chance that, in a couple of years, I might discover some mildly useful new service. But in less than three months, ChatGPT and its near relations really have changed my world.

Bing, Microsoft’s search engine, is adding chat features, and I’m using a different engine to do literature reviews. Professor friends are being flooded with machine answers on assignments and thinking about how to redesign coursework to make it unhackable. And the machines are already nibbling around the edges of my profession: Reuters reports that AI-generated books are popping up on Amazon, while the science-fiction magazine Clarkesworld just announced that it would temporarily close submissions because the slush pile was overwhelmed with machine-manufactured dreck.

This is a major problem, though not exactly the one you might think I’d be complaining about: I’m not worried that artificial intelligence is coming for my job. Indeed, as I wrote a few months back, in the short term, I expect that AI will actually be good for established writers and outlets, precisely because it generates so much bad writing.

The productivity of these AIs is astounding; in a few minutes they can pound out a thousand words that would have taken a human hours to write. But luckily, for those of us who already have jobs, AI quality is astoundingly bad. CNET and Men’s Journal experimented with AI-generated articles, only to find that they were riddled with errors, because AI doesn’t know or care what is true; it knows only what sort of thing its prediction engine tells it ought to come next in a sentence or paragraph.

Unscrupulous people will nonetheless be happy to swamp the internet with this garbage, in hopes of attracting reader eyeballs long enough to sell ads. Readers drowning in unreliable ersatz content will probably learn to place more value on journalistic brand names with reputations for accuracy to defend. Our biggest problem, in the short term, is likely to be akin to what Clarkesworld is facing: Publicity agents armed with AIs and mailing lists will stuff our inboxes with even more inappropriate pitches.

Yet if AI isn’t truthful enough to do good journalism, neither is it a good enough liar to write good fiction, as best-selling science fiction author John Scalzi pointed out on his blog. Current versions have no creative spark or deep understanding of human motivations; they serve up warmed-over pastiches of better authors, rendered in a prose style that seems to have been picked up from databases of regulatory filings.

What, then, is the problem? Well, for one thing, this will make it harder for fiction and nonfiction outlets to find new talent. The internet created a lot of new pathways to success for nontraditional writers — 20 years ago, for instance, blogs helped me break into journalism, and Scalzi to break into fiction writing. Other writers have found success self-publishing on Amazon. But none of us had to swim through a boundless sea of AI-generated nonsense to reach editors or readers.

In the longer term, I confess, I am less optimistic than Scalzi, who believes that “they just don’t have what it takes” to do his job, “and short of actual consciousness in the AI, may not ever.” AIs aren’t human (notwithstanding the lovelorn AI who begged a New York Times reporter to ditch his wife and run away with her). But I’m not sure they won’t quickly become very good at emulating humans in all the ways that readers care about.

After all, it takes quite a while for us to learn how to emulate humans. Many of the funny errors made by AI strike me as similar to the funny things my parent friends report their kids saying — like AI, kids know a lot of facts and rules, but don’t necessarily have a good mental model for how everything should hang together. As for its larger flaws, even good young writers need time to develop their prose style, or master journalistic ethics.

And unlike a young writer, AI can brute-force its way to reader-pleasing output. It can become human — or close enough — in roughly the same way humanity did, through endless evolution, except over the course of hours and days rather than millennia. The machines can test small changes over and over, and over and over and over, keeping what people like, jettisoning what we don’t. It may take them a lot of effort to attract sufficient human attention to make a good test. But of course, they’ll never get tired or bored, or decide to give up and go to law school.

I expect this will take some time and, as I say, in the meantime, an established reputation will only become more valuable. Still, I wonder . . . how much time, exactly?
