Sam Altman claims ChatGPT's adult mode means OpenAI can 'safely relax the restrictions' on the chatbot, but the firing of a critic of the plan is a reason to be wary
OpenAI insists new safeguards make adult mode responsible, but the timing of a prominent critic’s departure is a red flag
OpenAI is about to give ChatGPT an adults-only option. At almost the same moment, the company has parted ways, under disputed circumstances, with one of the executives responsible for deciding how far the system should be allowed to go, as first reported by The Wall Street Journal. OpenAI CEO Sam Altman's promise of a responsible, safe adult mode for ChatGPT is now at risk of looking hollow.
Ryan Beiermeister led product policy at OpenAI, shaping the rules and enforcement mechanisms governing ChatGPT’s behavior, at least until last month. The timing is notable: the WSJ reports her departure came soon after she raised concerns about the adult mode plans.
OpenAI says her exit was unrelated to any objections she voiced and was instead tied to an allegation of discrimination, a claim she has called “absolutely false.” Even so, the timing is difficult to ignore.
Altman first teased adult mode in October, and it should debut soon. The idea is to allow verified adults to generate AI erotica and engage in explicit conversations. Altman framed the shift as part of a broader effort to make ChatGPT more flexible and less sanitized.
"We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right," Altman said at the time. "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."
According to the report, Beiermeister warned colleagues that the company’s mechanisms for blocking child exploitation content were not strong enough and that preventing teenage users from accessing adult material would be far harder than executives seemed to believe. Even if her departure from OpenAI has nothing to do with the warning, it's something guaranteed to raise eyebrows among those already worried about sex online.
The adult internet has always existed, and it has always been lucrative. That fact sits in the background of this story. Companies that want growth eventually confront the gravitational pull of sexual content. It drives engagement. It keeps users logged in. It fuels subscriptions. OpenAI is not immune to those incentives.
What makes this moment different is the nature of the product. ChatGPT is interactive, adaptive, and capable of responding to a user’s emotional cues. It can tailor fantasies in real time. The shift from passive consumption to personalized simulation changes the stakes.
Adulting AI
Altman’s argument rests on the idea that maturity has arrived. Early versions of ChatGPT were deliberately restrictive. The system often refused to engage even in mild romantic roleplay. Many users complained that it felt stiff and overly cautious.
The premise now is that better safety systems, improved monitoring, and more robust age verification make expansion possible. Verified adults, in this view, should be treated like adults.
That principle sounds reasonable. Adults routinely access erotic content online. If a chatbot can generate a steamy short story for a consenting adult, why should that be treated differently from a romance novel on a bookstore shelf?
But ChatGPT is not a niche adult app. It is a general-purpose assistant used in offices, classrooms, and homes. It drafts emails, explains homework, helps with coding, and offers companionship to people who feel isolated.
Beiermeister’s reported worry about child exploitation and teenage access speaks to a familiar weakness in digital safeguards. Teenagers often bypass restrictions on social platforms with ease, while identity checks can be spoofed.
OpenAI would likely argue that refusing to offer adult content does not prevent its existence. Competitors already do. Elon Musk’s xAI launched Ani, a flirtatious anime-styled AI companion, and the market has shown an appetite for AI companions that blur the line between conversation and seduction.
Yet xAI’s recent experience, when its Grok AI chatbot was reportedly used to generate sexualized deepfakes without consent, has shown the dangers of swimming in these waters. UK regulators opened investigations into whether adequate safeguards were built into the system’s design, and the company rushed to impose new restrictions on editing images of real people to show them in revealing clothing.
OpenAI may not stumble in the same way, but once this kind of explicit capability exists, it can be repurposed in ways designers did not anticipate or cannot fully control.
Maturity missing
The reported firing of Beiermeister makes things seem unsavory in other ways. Though OpenAI insists her termination had nothing to do with her policy objections, the fact that there's any debate on it isn't ideal for the company. When a senior leader responsible for crafting and enforcing safety rules exits amid a policy dispute, observers draw connections.
Still, ChatGPT's adult mode might be implemented thoughtfully, with clear boundaries and strong enforcement. All of the current concerns might evaporate. Sexuality is not inherently harmful, and adults are capable of making choices about what they consume.
But there are already plenty of stories of people falling in love with their version of a ChatGPT personality. Adding sexual content to that equation is unlikely to cool those attachments.
The market pressure to expand into adult content is obvious. But there is, or at least should be, a moral calculus alongside the market logic. ChatGPT has become an infrastructure for millions of people. Decisions about its evolution carry social weight.
If the firing of Ryan Beiermeister has nothing to do with her objections, OpenAI has an opportunity to make that clear and to show that policy debates remain robust inside its walls. If it cannot, the suspicion will linger that growth has taken priority over caution.
When a company loosens its guardrails, the world watches to see who is still holding the map. In this case, one of the people tasked with drawing the boundaries is no longer in the room, and without that essential disagreement, any decision is likely to come off as imperfect at best.
OpenAI wants to treat adults like adults. That aspiration should include treating internal critics like indispensable partners. Otherwise, adult mode won't be adult in the most important way: keeping things safe for kids.


Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.