ChatGPT’s new age-detection feature is misfiring — and adult users are getting stuck in teen mode

(Image credit: Getty Images / SOPA Images)

  • OpenAI has rolled out ChatGPT's new age-prediction feature globally ahead of the upcoming Adult Mode launch
  • Some adults are getting tagged as teens
  • Frustrated users worry that the verification needed to overturn inaccurate restrictions invades their privacy

ChatGPT's new age-prediction AI model is rolling out globally, but it's proving a little overzealous in its attempts to spot users under 18 and automatically apply the "teen mode" content filters.

Using AI to identify underage users and slot them into their own version of the chatbot has its appeal, especially with ChatGPT's adult mode due to arrive soon. OpenAI's belief is that its AI models can infer a user's likely age based on behavior and context.

But it seems that ChatGPT isn't only applying protective measures to users under 18. More than a few adult subscribers have found themselves reduced to talking to the teen mode version of ChatGPT, with restrictions keeping them from engaging in more mature topics with the AI. It's been an ongoing issue since OpenAI began testing the feature a couple of months ago, but that hasn't prevented the wider rollout.

The technical side of this feature is murky. OpenAI says the system uses a combination of behavioral signals, account history, usage patterns, and occasionally language analysis to make an age estimate. In cases of uncertainty, the model errs on the side of caution. In practice, this means newer accounts, users with late-night usage habits, or those who ask about teen-relevant topics may find themselves swept up in the safety net even if they’ve subscribed to the Pro version of ChatGPT for a long time.
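OpenAI hasn't published how those signals are weighed, so any concrete picture is guesswork. Purely as an illustration, a "combine weak signals and err on the side of caution" classifier might look something like the hypothetical Python sketch below; the signal names, weights, and threshold are assumptions for the sake of the example, not anything OpenAI has disclosed.

```python
# Illustrative sketch only: OpenAI has not described its actual age-prediction
# model. This hypothetical scorer shows the general shape of a classifier that
# combines weak behavioral signals and defaults to teen mode when unsure.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int          # newer accounts carry less evidence
    late_night_usage_ratio: float  # share of sessions in late-night hours
    teen_topic_ratio: float        # share of chats touching teen-relevant topics
    has_payment_history: bool      # e.g. a long-running Pro subscription

def estimate_adult_probability(s: AccountSignals) -> float:
    """Combine weak signals into a rough probability that the user is an adult."""
    score = 0.5  # start neutral
    score += 0.2 if s.account_age_days > 365 else -0.1
    score += 0.2 if s.has_payment_history else 0.0
    score -= 0.2 * s.late_night_usage_ratio
    score -= 0.3 * s.teen_topic_ratio
    return max(0.0, min(1.0, score))

def assign_mode(s: AccountSignals, caution_threshold: float = 0.8) -> str:
    """Err on the side of caution: anything below a high bar gets teen mode."""
    return "adult" if estimate_adult_probability(s) >= caution_threshold else "teen"

# Even a long-time subscriber who chats late at night can fall below the bar:
print(assign_mode(AccountSignals(900, 0.7, 0.4, True)))  # prints "teen"
```

The high threshold is the crux of the complaints: the more cautious the default, the more legitimate adults get misclassified along the way.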

AI ID confirmation

On the surface, it seems like a classic case of good intentions meeting blunt implementation. OpenAI clearly wants to create a safer experience for younger users, especially given the tool’s growing reach in education, family settings, and teen creative projects.

For users flagged incorrectly, the company says it’s easy to resolve. You can confirm your age through a verification tool in Settings. OpenAI uses a third-party tool, Persona, which in some cases may prompt users to submit an official ID or a selfie video to confirm who they are. But for many, the bigger issue isn’t the extra click. It’s that they’re being misread by a chatbot, and have to give more personal details to beat the accusation.

Asking for ID, even if it's optional and anonymized, raises questions about data collection, privacy, and whether this is a backdoor to more aggressive age-verification policies in the future. Some users now believe OpenAI is testing the waters for full identity confirmation under the guise of teen safety, while others worry the model could be trained in part on their submitted materials, even though the company insists it is not.

"Great way to force people to upload selfies," one Redditor wrote. "If [OpenAI] ask me for a selfie, I'll cancel my subscription and delete my account," another wrote. "I understand why they're doing this, but please find a less invasive way. "

In a statement on its help site, OpenAI clarified that it never sees the ID or image itself. Persona simply confirms whether the account belongs to an adult and passes back a yes or no result. The company also says all data collected during this process is deleted after verification, and the only goal is to correct mistaken classification.

The tension between OpenAI's goal of personalized AI and its need to layer on safety mechanisms that don't alienate users is on full display. And the company's explanations of how much it can infer about someone from behavioral signals may not satisfy everyone.

YouTube, Instagram, and other platforms have tried similar age estimation tools, and all have faced complaints from adults accused of being underage. But with ChatGPT now a regular companion in classrooms, home offices, and therapy sessions, the idea of an invisible AI filter suddenly using kid gloves feels especially personal.

OpenAI says it will continue refining the model and improving the verification process based on user feedback. But the average user looking for wine pairing ideas who gets told they're too young to drink might just quit ChatGPT in annoyance. No adult will be happy to be mistaken for a child, especially by a digital robot.


Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
