Poets and Writers

Managing Submissions in the Age of AI


Last July, shortly after announcing that Lillian-Yvonne Bertram’s speculative essay collection, A Black Story May Contain Sensitive Content, had won Diagram’s 2023 chapbook contest, editor Ander Monson received a raft of text messages from concerned friends. He discovered that he and Diagram, a literary magazine that publishes chapbooks through New Michigan Press, were being excoriated on X (formerly Twitter) because Bertram had created the chapbook using artificial intelligence (AI).

“So many people were attacking us and Lillian-Yvonne for what they thought was an ethical breach,” Monson says. “They heard ‘AI-written book wins literary contest,’ which is not exactly what happened.” While Bertram did use AI, it was explicitly to engage the technology in an artistic experiment: Bertram employed a process called “fine-tuning” with GPT-3, the technology underlying the ChatGPT chatbot, in which they fed it text by Gwendolyn Brooks to shift the AI engine’s linguistic “tone and approach,” as Bertram put it in the introduction to their chapbook. They then repeatedly prompted GPT-3 to “tell me a Black story.” Each time, GPT-3 came back with a different, often fascinating reply. Bertram lightly edited the most compelling responses for inclusion in the book.

“People had thought I’d intentionally tried to fool Diagram by submitting AI work,” Bertram says. “But Diagram knew what the project was. I wasn’t trying to fool anyone.”

Bertram’s case points to questions that editors and literary organizations are increasingly wrangling with as they face a rise of AI-generated and -enhanced submissions: Should they allow authors to use AI, and, if so, what counts as an acceptable use of the technology versus cheating? How can they weed out illegitimate AI submissions, not only for contests, but also during regular reading periods?

Sci-fi magazine Clarkesworld bans all submissions that have been touched by AI. “We consider them the fruit of a poisoned tree,” says Neil Clarke, the magazine’s publisher and editor in chief. He’s referring to the alleged training of AI on pirated text, which has spurred copyright-infringement lawsuits by the Authors Guild—a writer-advocacy organization—and numerous writers.

Mary Gannon, executive director of the Community of Literary Magazines and Presses (CLMP), recommends that magazines take a stance on AI and make it explicit in their submission guidelines, whether forbidding such tools or requiring disclosure of their use. She also suggests that publications consider other options for dealing with AI, such as requiring authors to confirm during the submission process that AI was not used, or testing suspect work with detection tools like Copyleaks or GPTZero, which charge between $8 and more than $20 per month, depending on the level of service.

“The big issue is whether or not an author is being transparent about the origin of the work,” Gannon says. In January, for example, author Rie Qudan, winner of Japan’s prestigious Akutagawa Prize for her novel, Tōkyō-to Dōjō Tō (Tokyo Sympathy Tower), revealed at the award ceremony that about 5 percent of the book used verbatim language from ChatGPT.

Also in January, Jessica Bell, publisher of Vine Leaves Press in Athens, Greece, discovered an AI-generated memoir submission in the slush pile. She was taken with the query letter, from someone claiming to be a disabled man “who had gone through quite a bit of hardship.” But the sample pages were a dry how-to manual for surviving hardship, not the promised memoir. When she discovered that he had published more than ten books on Amazon in two months, she became suspicious and checked the memoir text with Copyleaks, which confirmed that the sample pages were AI-generated.

Bell decided to change the press’s submission guidelines, stating that AI-written submissions “will be rejected.” But, she says, “I fear that it’s not going to stop people.”

When Christine Stroud, editor of Pittsburgh-based Autumn House Press, heard about the Vine Leaves AI submission on an online forum through which CLMP members communicate, she revised Autumn House’s submission guidelines to forbid work generated or supported by AI. She had heard about the rise of AI-supported academic papers but had not realized literary writers would submit AI-generated work. “It was perhaps naive of me to assume folks were not doing it,” she says.

Some magazines, like Clarkesworld, have been inundated with AI submissions that are essentially spam. Clarke believes that charging a reading fee would deter these submissions. (Most contests do require a fee, though many regular magazine or book submissions do not.) But a fee could further marginalize already burdened populations, especially in communities that are geographically isolated—from which Clarke, for one, has been trying to cultivate submissions. Not only can even a few dollars be prohibitive for such communities, but credit cards may be unavailable to them. Short submission windows can be an alternative deterrent for AI fraudsters, says Clarke. But he has resisted shortening Clarkesworld’s reading period—despite receiving thousands of AI-generated submissions—because that step, too, would make it less likely for some overseas writers to contribute.

Clarke also avoids AI detection tools like GPTZero because, as a 2023 Stanford study showed, they are more likely to erroneously flag writers for whom English is not their first language.

Still, the fight against AI is only getting harder. Clarkesworld has banned several thousand individuals for submitting AI-generated stories and fields more AI-generated submissions as time goes on, says Clarke. He says he might reconsider the magazine’s stance against such submissions when AI systems are ethically trained, but he doubts he will change his mind even then—unless the quality of the writing improves. “A good story works on multiple levels, and an AI story doesn’t. It doesn’t know what it’s writing.”
