OpenAI is deploying a new age prediction system across ChatGPT consumer plans to better identify and protect teenage users.
Quick Summary (TL;DR):
- OpenAI has begun rolling out an age prediction model to flag under-18 users.
- Accounts identified as minors will automatically receive extra content safeguards.
- Users wrongly flagged can verify their age with a selfie using Persona.
- The move comes amid growing regulatory pressure and safety concerns.
What Happened?
OpenAI announced it is rolling out an AI-based age prediction system on ChatGPT to help detect whether a user might be under 18. If so, ChatGPT will automatically activate protections to reduce their exposure to sensitive content. This update follows months of public scrutiny and regulatory attention over how tech platforms handle teen safety.
> We’re rolling out age prediction on ChatGPT to help determine when an account likely belongs to someone under 18, so we can apply the right experience and safeguards for teens. Adults who are incorrectly placed in the teen experience can confirm their age in Settings > Account.…
> — OpenAI (@OpenAI) January 20, 2026
OpenAI’s New AI Tool to Identify Minors
In a bid to bolster safety for younger users, OpenAI is now using an age prediction model that relies on a mix of account-level and behavioral data. This includes:
- How long the account has existed.
- The user’s self-declared age.
- Typical times of activity.
- Usage patterns over time.
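OpenAI has not published how these signals are combined, but the general idea of scoring account-level and behavioral signals can be sketched as a toy heuristic. The field names, weights, and threshold below are all illustrative assumptions; the real system is a trained model, not hand-tuned rules.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    """Signals of the kind OpenAI describes (names are illustrative)."""
    account_age_days: int
    self_declared_age: Optional[int]
    late_night_activity_ratio: float  # fraction of sessions at unusual hours

def likely_minor(signals: AccountSignals, threshold: float = 0.5) -> bool:
    """Toy heuristic: weight each signal and compare against a threshold."""
    score = 0.0
    if signals.self_declared_age is not None and signals.self_declared_age < 18:
        score += 0.6  # a self-declared age under 18 is a strong signal
    if signals.account_age_days < 90:
        score += 0.2  # newer accounts carry less history to go on
    score += 0.3 * signals.late_night_activity_ratio
    return score >= threshold
```

A classifier like this would err in both directions, which is why the rollout pairs automatic flagging with an age-verification escape hatch for adults.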
The age prediction tool has already begun rolling out and will expand to the European Union in the coming weeks to comply with regional requirements.
When the model determines an account likely belongs to someone under 18, ChatGPT applies a set of safeguards automatically, without requiring the user to declare their age during signup. These protections include limiting exposure to:
- Graphic violence or gory content.
- Viral challenges that encourage risky behavior.
- Sexual, romantic, or violent role play.
- Self-harm references.
- Content promoting extreme beauty standards or body shaming.
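In effect, a flagged account gets a reduced set of permitted content categories. A minimal sketch of that gating, with category names mirroring the list above (the gating logic itself is an assumption, not OpenAI's implementation):

```python
# Categories restricted for accounts flagged as likely under 18 (illustrative names).
TEEN_RESTRICTED_CATEGORIES = {
    "graphic_violence",
    "risky_viral_challenges",
    "sexual_romantic_or_violent_roleplay",
    "self_harm_references",
    "extreme_beauty_standards",
}

def allowed_categories(all_categories: set, is_likely_minor: bool) -> set:
    """Return the content categories a given account may receive."""
    if is_likely_minor:
        return all_categories - TEEN_RESTRICTED_CATEGORIES
    return all_categories
```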
These safety measures are informed by academic research on adolescent development, especially in areas like risk perception, peer influence, and impulse control.
Identity Verification with Persona
For users incorrectly identified as under 18, OpenAI provides a workaround. They can verify their age using Persona, a selfie-based identity service also used by companies like Roblox. Once verified, full access to ChatGPT features is restored.
Users can also review whether safeguards have been applied by checking under Settings > Account in the ChatGPT interface.
Ongoing Work and Parental Controls
This rollout builds on OpenAI’s Teen Safety Blueprint and the company’s Under-18 Principles for Model Behavior, which aim to ensure young people can benefit from technology while staying safe.
OpenAI has also previously introduced parental controls for ChatGPT. These include features such as:
- Setting quiet hours when ChatGPT cannot be used.
- Managing memory and model training settings.
- Alert notifications if signs of distress are detected in a teen’s usage.
Additionally, OpenAI has formed a council of eight experts to advise on how AI could impact teen mental health, emotions, and motivation.
The company says it is continually refining its model to improve age detection and will update users as progress is made.
SQ Magazine Takeaway
Honestly, I think this is a smart and long-overdue move. The internet is a complicated space for teens, and AI chatbots like ChatGPT have only added to that complexity. OpenAI taking real steps to proactively protect young users shows it is listening to concerns and trying to strike a balance between tech freedom and safety. The ability to quickly verify age through Persona also keeps the system from being too rigid or frustrating for adults. This rollout may not solve every issue, but it is a solid foundation for making generative AI safer for everyone.