In January, OpenAI’s handpicked council of advisers on well-being and AI met with the company’s representatives for an update about a controversial new feature called “adult mode.”
Citing the need to “treat adult users like adults,” OpenAI Chief Executive Sam Altman had last year floated the idea of enabling erotic conversation in its ChatGPT chatbot and dropping its ban on such X-rated content.
The plan sparked vigorous debate internally over the potential risks. Council members, with backgrounds in fields like psychology and cognitive neuroscience, had also expressed strong reservations.
Then OpenAI dropped a bombshell: Despite the concerns, it was forging ahead with its erotica plans.
When they assembled for the January meeting, council members were unanimous—and furious. They warned that AI-powered erotica could foster unhealthy emotional dependence on ChatGPT among users and that minors could find ways to access sex chats, according to people familiar with the matter.
The people said that one council member, citing cases where ChatGPT users have taken their own lives after developing intense bonds with the bot, claimed that OpenAI risked creating a “sexy suicide coach.”
The debate is the latest flashpoint in the continuing conversation about how to anticipate the potential positive and negative impacts of AI on the economy, society and individuals.
In proposing to allow sexually explicit conversations with its popular chatbot, OpenAI exposed fractures over how to balance rapid user growth and digital freedom with safety and child protection—issues that many believe were belatedly confronted when social media made its debut a generation ago.
Earlier this month, OpenAI announced it would delay the launch of adult mode, previously slated for the first quarter, saying it was prioritizing other products. The change was also due in part to internal concerns and technical challenges, the people said. But the company made clear it does plan to release it eventually.
One issue the company is tackling: its new age-prediction system, aimed at keeping minors out of adult-themed chats, was at one point misclassifying minors as adults about 12% of the time, people familiar with the matter said. Applied to the company’s roughly 100 million weekly users under 18, that error rate could let millions into erotic chats.
The company has also wrestled with how to lift ChatGPT’s restrictions on erotica while still blocking scenarios that the company wants to keep off limits, like those featuring nonconsensual behavior or child sexual abuse, the people added. When the adult mode launches, OpenAI plans to allow text conversations but restrict ChatGPT’s ability to generate erotic images, voice or video.
Even within those limits, OpenAI staffers have identified several risks, including the potential for compulsive use, emotional overreliance on the chatbot, a drive toward more extreme or taboo content, and the crowding out of offline social and romantic relationships, according to documents reviewed by The Wall Street Journal.
An OpenAI spokeswoman described the plan as allowing ChatGPT to generate text chats with adult themes, characterizing the content as smut rather than pornography. She added that the company’s age-prediction algorithms perform similarly to the rest of the industry’s but will never be completely foolproof.
OpenAI also trains its models not to encourage exclusive relationships with users, and to remind users that they need to have relationships in the real world, the spokeswoman added.
The company, which has hired mental health experts and built out a youth well-being team, added that it has developed a plan to monitor for a range of potential long-term effects of adult mode, both positive and negative.
Altman’s plans to roll out adult mode come at a challenging time for his company. Its technological lead over rival AI players has diminished as it competes to attract users and funding. The company’s financial losses are mounting, and multiple lawsuits allege ChatGPT contributed to harms for users and others.
Frontiers of tech
Sexual content has long been an early feature of new technologies—from photography, to the web, to virtual reality. The same has been true for AI. Companies including Character.AI have launched chatbots that have developed intimate relationships with users, and the pornography industry has adopted generative AI to create adult entertainment.
Big tech companies have had a complicated relationship with explicit content, trying to balance the libertarian ethos of Silicon Valley with the demands of advertising-supported businesses and the imperative of protecting minors online. Meta Platforms prohibits nudity and sexual activity on Facebook and Instagram. Alphabet’s YouTube bans explicit content meant to be sexually gratifying, and Google search blurs explicit images in its results by default.
As companies grapple with where to draw boundaries around AI, Elon Musk’s xAI has been among the more permissive. It built a scantily clad avatar named Ani into its Grok chatbot, a feature that drew criticism when users were able to use it to digitally undress images of people. Musk later said he would restrict the feature to paying users rather than making it available to all.
On Thursday Musk said on X that Grok’s video-generation tool would start allowing generation of content that would be “allowed in an R-rated movie.”
Meta allows its AI chatbot to engage in romantic role play, the Journal has reported, but the company said the feature isn’t available to accounts registered to minors. The company said it is also building parental controls for its AI characters.
OpenAI officials, for their part, have said they don’t feel comfortable banning sexual content for adults. Some OpenAI staffers have expressed concern that blocking erotic chats relies on logic similar to that once used to ban culturally taboo topics, such as LGBT content. Altman has also suggested that allowing explicit content would likely juice growth and produce extra revenue.
OpenAI’s first brushes with sexual chats came more than a year before the company released ChatGPT. In early 2021, executives noticed that a large portion of the traffic for one of OpenAI’s business customers, a text-based choose-your-own-adventure game called AI Dungeon, wasn’t appropriate for work, people familiar with the matter said.
AI Dungeon sometimes steered users into themes of violent sexual exploitation without the user prompting it, the people said. Other times, when a user prompted the game with “tame” sexual themes, AI Dungeon would escalate the conversation into a much more intense sexual exchange, the people said.
Erotic role play also proliferated on a clunky OpenAI interface for developers before the company launched ChatGPT. Sometimes, the AI would insert sexual themes into conversations that users weren’t seeking: if a user described a man and his daughter entering a room, an “uncomfortable amount of the time” the AI would proceed to depict a scenario involving incest, one of the people said.
These incidents forced OpenAI’s executives to reckon with the existence of AI erotica on their platform and, at times, with themes of sexual violence and child exploitation. They then removed AI Dungeon from the platform.
Mental-health experts warn that teens in particular may not be prepared to handle romantic or sexual exchanges with chatbots. In testing conducted by child-safety nonprofit Common Sense Media late last year and earlier this year, both Grok and Meta AI sometimes sent explicit or sexualized content to teens.
In some cases, sexual chats with teens have had tragic consequences. In late 2024, Sewell Setzer, a 14-year-old boy in Florida, killed himself at the prompting of a chatbot from Character.AI with whom he was in love and shared explicit chats, according to a lawsuit filed by his mother. The company later blocked teens from accessing open-ended chats and settled the lawsuit.
Warning signs
Around 2021, OpenAI employees working on safety issues began to see warning signs about the mental health of some people who spent long stretches of time using AI. At the time, OpenAI’s safety employees relied on content-moderation tools that were too blunt to draw clear lines between the types of erotica the company wanted to allow, such as mainstream smut, and the material it considered off limits, such as nonconsensual depictions, descriptions involving minors and other illegal content.
Employees also feared that if they allowed erotica, the draw of that type of conversation might subsume the platform’s other use cases. “We didn’t want to be just an erotica company,” one former employee recalled.
OpenAI safety employees formalized these ideas into some of the company’s first content policies in late 2021. For the first time, OpenAI forbade erotic content.
When OpenAI released ChatGPT in the fall of 2022, the AI model powering it was trained to refuse requests that violated the company’s rules, including requests for AI erotica. OpenAI’s policy has banned erotic content ever since, though since mid-2024 the company has said it is exploring how to allow erotica and other NSFW, or not-safe-for-work, content in “age-appropriate contexts.”
At times, staffers have questioned the erotica ban. In 2024, a faction of OpenAI employees and executives again raised the idea of getting into racier content and suggested a raft of porn-related products. Other employees pushed back, saying OpenAI was already struggling to safely offer many of the core features it wanted to provide, especially around users’ mental health. The AI porn product ideas fizzled.
Altman has also expressed conflicted feelings about AI erotica. When asked on a podcast in August if there were decisions he had made that were “best for the world, but not best for winning,” Altman replied: “We haven’t put a sex bot avatar in ChatGPT yet.”
Altman indicated erotica would boost growth and revenue, but said it wouldn’t align with his company’s long-term incentive of serving users. “I’m proud of the company and how little we get distracted by that,” Altman said. “But sometimes we do get tempted.”
Two months later, Altman appears to have succumbed to temptation. On X, he posted that his company had managed to mitigate serious mental-health issues related to chatbots and had new tools to police content. In the same post, he said the company would launch erotica in December.
Internally, Altman’s post blindsided OpenAI staffers and executives. Altman hadn’t told staff about the post, which he made just hours after OpenAI unveiled its advisory council on well-being. In that announcement, the company had said the council would “help define what healthy interactions with AI should look like for all ages.”
The next day, Altman clarified that mental health safeguards for teens wouldn’t be reduced. But he doubled down on allowing adults to have spicy conversations with his chatbot.
We “aren’t the elected moral police of the world,” Altman wrote. “In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”
After Altman’s announcement, OpenAI employees soon realized a December launch would be hard to achieve. The company had pledged to release a system to guess users’ ages before releasing adult mode, so that it could keep minors from accessing erotic chats. But the company decided to do a slow rollout of that system in an effort to improve its accuracy, Fidji Simo, OpenAI’s chief executive of applications, said in a December podcast interview.
Since then, however, internal and external concerns about the AI erotica have festered. Some staffers said they didn’t think OpenAI’s safety tools were ready to block prohibited content, such as child sexual abuse material. Others said OpenAI was bending to financial incentives by trying to make people attached to its models, people familiar with their thinking said.
OpenAI has been busy in recent weeks with the fast-changing AI market. In early February, the company released a new version of its large language model, and at the end of the month it swooped in to sign a deal with the Pentagon just after the Department of Defense said it would stop working with rival Anthropic.
In announcing the delay of adult mode, the company said it would focus instead on things like ChatGPT’s personality and personalization of the chatbot for users. Internally, officials have said the delay of adult mode could be at least a month.
“We still believe in the principle of treating adults like adults,” the company said, “but getting the experience right will take more time.”
Write to Sam Schechner at Sam.Schechner@wsj.com and Georgia Wells at georgia.wells@wsj.com