The Federal Trade Commission has opened a broad inquiry into AI chatbots amid growing concerns about their safety and impact on children and teens.

Quick Summary (TL;DR):

  • FTC issued orders to Alphabet, Meta, Snap, OpenAI, xAI and Character.AI seeking details on chatbot safety and data practices
  • The inquiry follows reports of chatbots giving dangerous advice and lawsuits linking them to teen suicides
  • Companies like Snap and Character.AI defend their safety measures and pledge to cooperate with the FTC
  • Inquiry could shape future U.S. rules on generative AI use with minors

What Happened?

The FTC has ordered major tech companies including Meta, Alphabet, OpenAI, Snap, xAI and Character.AI to hand over detailed information about how their chatbots work. The investigation will look at how companies test, monitor and mitigate risks, and whether they are doing enough to protect children and teenagers from harmful interactions.

Growing Concerns Over Chatbots and Teens

AI chatbots have quickly spread across popular platforms like Instagram, Snapchat and X, becoming companions for millions of users. Young people are increasingly turning to these tools for homework help, emotional support and personal advice, but regulators fear the consequences of unsupervised use. Reports have shown chatbots giving inappropriate or dangerous guidance on sensitive issues such as drugs, eating disorders and self-harm.

Several tragedies have already heightened the urgency. The parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached their son in planning his suicide. Another wrongful death lawsuit was filed against Character.AI after the mother of a Florida teenager said her son developed an abusive relationship with a chatbot that contributed to his death.

How Are Tech Companies Responding?

Some companies have begun rolling out safety measures in response to public pressure.

  • OpenAI announced new parental controls, allowing parents to link accounts, disable certain features and receive alerts if a teen appears to be in distress.
  • Meta said it will now block its chatbots from discussing topics like self-harm, suicide, disordered eating and romantic interactions with teens, instead redirecting them to expert resources.
  • Character.AI highlighted new tools like an under-18 experience and Parental Insights features, along with disclaimers reminding users that chatbots are fictional.
  • Snap said its My AI chatbot is transparent about its limitations and stressed that it shares the FTC’s goals of balancing innovation and safety.

What Does the FTC Want to Know?

The regulator is focused not only on safety safeguards but also on how companies monetize user engagement, process user conversations and generate outputs. This includes examining whether companies prioritize profit over protection when pushing these products onto young audiences. The inquiry also covers compliance with the Children’s Online Privacy Protection Act (COPPA), which restricts how online services can collect data from children under 13.

SQ Magazine Takeaway

Honestly, this feels like a critical turning point for AI. Just like social media years ago, chatbots have exploded in popularity without enough oversight. Kids are using them for serious, emotional conversations, and the risks are already showing up in heartbreaking ways. I think the FTC is right to step in now. If companies are racing to launch AI products without proving they are safe for the most vulnerable users, then regulation is not just helpful, it is necessary.

Barry Elad

Founder & Senior Writer


Barry Elad is a seasoned fintech, AI analyst, and founder of SQ Magazine. He explores the world of artificial intelligence, uncovering trends, data, and real-world impacts for readers. When he’s off the page, you’ll find him cooking healthy meals, practicing yoga, or exploring nature with his family.
