The Guardian

OpenAI looks into letting ChatGPT users create AI pornography

- Dan Milmo

OpenAI, the California-based company behind ChatGPT, is exploring whether users should be allowed to create artificial intelligence-generated pornography and other explicit content with its products.

While the company stressed its ban on deepfakes would still apply to the adult material, campaigners suggested the proposal undermined its mission statement to produce “safe and beneficial” AI.

OpenAI, which is also the developer of the DALL-E image generator, revealed it was considering letting developers and users “responsibly” create “not-safe-for-work” (NSFW) content. OpenAI said this could include “erotica, extreme gore, slurs, and unsolicited profanity”.

It said: “We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts … We look forward to better understanding user and societal expectations of model behaviour in this area.”

The proposal was published as part of an OpenAI document discussing how it develops its AI tools.

Joanne Jang, who worked on the document, told the US news organisation NPR that OpenAI wanted a discussion about whether the generation of erotic text and nude images should always be banned from its products. However, she stressed that deepfakes would not be allowed.

“We want to ensure people have maximum control to the extent that it doesn’t violate the law or other people’s rights, but enabling deepfakes is out of the question, period,” Jang said. “This doesn’t mean we are trying now to create AI porn.” She conceded that whether output was seen as porn “depends on your definition”, adding: “These are the exact conversations we want to have.”

Jang said there were “creative cases in which content involving sexuality or nudity is important to our users”, but this would be explored in an “age-appropriate context”.

The Collins dictionary refers to erotica as “works of art that show … sexual activity, and which are intended to arouse sexual feelings”.

The spread of AI-generated pornography was underlined this year when X was forced to temporarily ban searches for Taylor Swift content after the site was deluged with deepfake explicit images of the singer.

In the UK, Labour is considering a ban on nudification tools that create naked images. The Internet Watch Foundation, which protects children from sexual abuse online, has also warned paedophiles are using AI to create nude images of children, using technology freely available online.

Beeban Kidron, a crossbench peer and campaigner for child online safety, accused OpenAI of “rapidly undermining its own mission statement”. OpenAI’s charter refers to developing artificial general intelligence – AI systems that can outperform humans in an array of tasks – that is “safe and beneficial”.

“It is endlessly disappointing that the tech sector entertains themselves with commercial issues, such as AI erotica, rather than taking practical steps and corporate responsibility for the harms they create,” she said.

Under OpenAI rules for companies that use its technology to build their own AI tools, “sexually explicit or suggestive content” is prohibited, although there is an exception for scientific or educational material.

Mira Murati, OpenAI’s chief technology officer, told the Wall Street Journal she was “not sure” if the company would allow its video-making tool, Sora, to create nude images.

“You can imagine there are creative settings in which artists might want to have more control over that, and we are working with artists and creators to figure out what’s useful, what level of flexibility should the tool provide,” she said.

