OpenAI looks into letting ChatGPT users create AI pornography
OpenAI, the California-based company behind ChatGPT, is exploring whether users should be allowed to create artificial intelligence-generated pornography and other explicit content with its products.
While the company stressed its ban on deepfakes would still apply to the adult material, campaigners suggested the proposal undermined its mission statement to produce “safe and beneficial” AI.
OpenAI, which is also the developer of the DALL-E image generator, revealed it was considering letting developers and users “responsibly” create “not-safe-for-work” (NSFW) content. OpenAI said this could include “erotica, extreme gore, slurs, and unsolicited profanity”.
It said: “We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts … We look forward to better understanding user and societal expectations of model behaviour in this area.”
The proposal was published as part of an OpenAI document discussing how it develops its AI tools.
Joanne Jang, who worked on the document, told the US news organisation NPR that OpenAI wanted a discussion about whether the generation of erotic text and nude images should always be banned from its products. However, she stressed that deepfakes would not be allowed.
“We want to ensure people have maximum control to the extent that it doesn’t violate the law or other people’s rights, but enabling deepfakes is out of the question, period,” Jang said. “This doesn’t mean we are trying now to create AI porn.” She conceded that whether output was seen as porn “depends on your definition”, adding: “These are the exact conversations we want to have.”
Jang said there were “creative cases in which content involving sexuality or nudity is important to our users”, but this would be explored in an “age-appropriate context”.
The Collins dictionary defines erotica as “works of art that show … sexual activity, and which are intended to arouse sexual feelings”.
The spread of AI-generated pornography was underlined this year when X was forced to temporarily ban searches for Taylor Swift content after the site was deluged with deepfake explicit images of the singer.
In the UK, Labour is considering a ban on nudification tools that create naked images. The Internet Watch Foundation, which protects children from sexual abuse online, has also warned paedophiles are using AI to create nude images of children, using technology freely available online.
Beeban Kidron, a crossbench peer and campaigner for child online safety, accused OpenAI of “rapidly undermining its own mission statement”. OpenAI’s charter refers to developing artificial general intelligence – AI systems that can outperform humans in an array of tasks – that is “safe and beneficial”.
“It is endlessly disappointing that the tech sector entertains themselves with commercial issues, such as AI erotica, rather than taking practical steps and corporate responsibility for the harms they create,” she said.
Under OpenAI rules for companies that use its technology to build their own AI tools, “sexually explicit or suggestive content” is prohibited, although there is an exception for scientific or educational material.
Mira Murati, OpenAI’s chief technology officer, told the Wall Street Journal she was “not sure” if the company would allow its video-making tool, Sora, to create nude images.
“You can imagine there are creative settings in which artists might want to have more control over that, and we are working with artists and creators to figure out what’s useful, what level of flexibility should the tool provide,” she said.