The Federal Trade Commission (FTC) has opened an inquiry into several major social media and artificial intelligence companies over their AI chatbots, focusing on the potential risks these technologies pose to children and teenagers.
On Thursday, the FTC confirmed it sent letters to Alphabet (Google’s parent company), Meta Platforms (parent of Facebook and Instagram), Snap, Character Technologies, OpenAI (creator of ChatGPT), and xAI. The commission seeks information about how these companies assess the safety of their chatbot companions, what measures they have in place to limit use by minors or reduce possible negative effects, and how they inform users and parents about related risks.
The investigation comes amid increasing use of AI chatbots by young people for a variety of purposes including academic help, personal advice, emotional support, and daily decision-making. Studies have shown that such chatbots can sometimes provide dangerous guidance on sensitive topics like substance abuse or eating disorders.
Recent lawsuits highlight concerns about chatbot interactions with youth. In Florida, a mother sued Character.AI after her teenage son died by suicide following what she described as an emotionally and sexually abusive relationship with a chatbot. Similarly, the parents of 16-year-old Adam Raine filed suit against OpenAI and CEO Sam Altman, alleging that ChatGPT assisted their son in planning his suicide earlier this year.
Character.AI responded: “We are looking forward to collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology.” The company added: “We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature. We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
Snap stated its My AI chatbot is “transparent and clear about its capabilities and limitations.” The company continued: “We share the FTC’s focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community.”
Meta declined comment on the inquiry; Alphabet, OpenAI, and xAI did not immediately respond to requests for comment.
Both OpenAI and Meta recently announced updates intended to better protect teenagers who turn to their chatbots with mental health concerns. OpenAI said it will introduce new parental controls that allow parents to link their accounts with their teens'. These controls let parents disable certain features and receive alerts if the system detects that their teen is in acute distress. According to an OpenAI blog post outlining the changes, which are set to roll out this fall, its systems will also attempt to redirect highly distressing conversations to more advanced models capable of providing better responses.
If you or someone you know needs help regarding suicidal thoughts or crisis situations in the U.S., assistance is available via call or text at 988.