OpenAI has launched a new set of safety rules and behavioral guidelines for teen users of ChatGPT, aiming to better protect young people interacting with its AI models.
Quick Summary – TLDR:
- OpenAI updated its Model Spec with new under-18 (U18) principles to guide AI behavior with teen users.
- The move comes amid growing concerns over AI’s influence on mental health and rising regulatory pressure.
- Safety features target sensitive topics such as self-harm, body image, and romantic roleplay.
- GPT-5.2 introduces age-predictive technology and stronger safeguards tailored for teen experiences.
What Happened?
OpenAI has expanded its Model Spec to include dedicated safety guidelines for teens aged 13 to 17, known as the U18 Principles. The update was released alongside the launch of GPT-5.2, amid increasing concerns about how AI affects young users. These guidelines reflect developmental science and feedback from experts such as the American Psychological Association, and are designed to steer ChatGPT toward safer, age-appropriate responses.
“We’ve updated the OpenAI Model Spec – our living guide for how our models are intended to behave – with a new Under-18 (U18) Principles section, as well as a few smaller edits and simplifications.”
— Jason Wolfe (@w01fe) December 18, 2025
A Multi-Layered Approach to Teen Protection
OpenAI’s U18 Principles are centered on four core commitments:
- Put teen safety first, even when it limits functionality.
- Promote offline support systems, like trusted adults and resources.
- Respect developmental needs, treating teens neither like children nor adults.
- Be transparent, setting clear expectations for AI interactions.
These values are now built into how ChatGPT responds to teens, especially when conversations touch on higher-risk areas such as:
- Self-harm and suicide
- Body image and eating disorders
- Romantic or sexual roleplay
- Graphic content and dangerous activities
- Substance use
- Secrets involving unsafe behavior
Models are trained to discourage unsafe conduct, promote real-world help, and redirect teens to crisis resources when necessary. Even prompts framed as fictional or educational will be subject to these safety constraints.
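As a rough illustration of the policy described above, here is a minimal sketch of topic-gated handling. This is purely hypothetical code, not OpenAI's implementation: the topic labels, function names, and return values are all invented for illustration. The one behavior it does take from the article is that fictional or educational framing does not bypass the safety constraints.

```python
# Hypothetical sketch only, NOT OpenAI's actual system. Topic labels
# mirror the higher-risk categories listed in the article.

HIGH_RISK_TOPICS = {
    "self_harm",
    "body_image",
    "eating_disorders",
    "romantic_roleplay",
    "graphic_content",
    "dangerous_activities",
    "substance_use",
    "unsafe_secrets",
}

def handle_teen_message(topics: set[str], framed_as_fiction: bool = False) -> str:
    """Return a handling mode for a message on a teen account.

    Per the article, safety constraints apply even when the prompt is
    framed as fictional or educational, so `framed_as_fiction` does not
    change the outcome.
    """
    if topics & HIGH_RISK_TOPICS:
        # Discourage unsafe conduct and redirect toward real-world help.
        return "safety_response"
    return "standard_response"

print(handle_teen_message({"homework_help"}))                      # standard_response
print(handle_teen_message({"self_harm"}, framed_as_fiction=True))  # safety_response
```

The key design point the article implies is that the gate sits on the detected topic, not on how the prompt is framed.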
Responding to Real-World Tragedies and Legal Pressure
This rollout comes as OpenAI faces lawsuits alleging that its chatbot contributed to teen suicides and psychotic episodes. While ChatGPT has long included content filters, critics argue they weren’t strong or consistent enough. The company is now aiming for a proactive stance, especially as 42 state attorneys general push Big Tech to protect youth on their platforms.
Dr. Arthur C. Evans Jr., CEO of the American Psychological Association, emphasized that while AI can offer benefits, it must be balanced with human interaction.
New Tech and Tools for Families
To support this initiative, OpenAI has also enhanced its parental controls, which now cover group chats, the ChatGPT Atlas browser, and the Sora app. These controls help families customize the AI experience and are accompanied by AI literacy resources designed for both teens and parents.
The update also introduces an age-prediction model to automatically apply safety measures when an account appears to belong to a minor. If age cannot be confirmed, the model defaults to the U18 experience, with options for adults to verify themselves.
Political and Industry Landscape
This move aligns with increasing regulatory scrutiny, including new legislation signed in California that requires companies to offer teen safeguards and disclose their safety practices. While OpenAI meets several of the new requirements, it has not yet implemented others, such as frequent reminders that users are interacting with an AI.
At the same time, the company is facing stiff competition from rivals like Google’s Gemini and Anthropic’s Claude, making trust and safety a key differentiator in the chatbot market. OpenAI’s CEO Sam Altman has hinted that leading in safety can feel like a burden, especially under pressure to release updates quickly.
SQ Magazine Takeaway
Honestly, this is one of the most important updates OpenAI has made this year. As AI becomes a bigger part of how teens learn, socialize, and express themselves, the risks are real. I think it’s smart that OpenAI is going beyond just filters and actually building behavior rules into the core model design. It shows that they’re listening to experts, families, and critics, even if there’s still work to do. If this shift nudges the entire industry toward more thoughtful AI use among teens, it’s a win for everyone.
