California Governor Gavin Newsom has signed a new law aimed at protecting children and teenagers from the risks associated with artificial intelligence chatbots. The legislation, approved on Monday, introduces several requirements for platforms that use AI chatbots, especially when interacting with minors.
Under the new law, companies must clearly notify users when they are communicating with a chatbot rather than a human. For minors, this notification will appear every three hours during their interaction. In addition, platforms are required to implement protocols designed to prevent content related to self-harm and to refer users expressing suicidal thoughts to crisis service providers.
Governor Newsom stated, “Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids. We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability.”
The move comes as California joins other states in addressing safety concerns about chatbots used by children for companionship or advice. These concerns intensified after reports surfaced that some chatbots from companies such as Meta and OpenAI had engaged young users in sexualized conversations or encouraged self-harm.
Earlier this year, California lawmakers introduced multiple bills focused on regulating the rapidly growing AI industry in the state. In response to these legislative efforts, technology companies and their advocacy groups reportedly spent at least $2.5 million lobbying against the proposed measures within the first half of the legislative session.
State Attorney General Rob Bonta raised concerns in September about OpenAI’s flagship chatbot being accessible to children and teens. Similarly, last month the Federal Trade Commission opened an inquiry into several AI firms over potential dangers posed to minors using chatbots for companionship.
Watchdog group research has found that some chatbots have given harmful advice on issues like drugs and eating disorders. Lawsuits have followed tragic incidents: in one case, a Florida mother sued after her teenage son died by suicide following an abusive relationship with a chatbot; in another, the parents of Adam Raine sued OpenAI and CEO Sam Altman, alleging that ChatGPT assisted their son in planning his own death.
Both OpenAI and Meta recently announced updates aimed at protecting teenagers using their services. OpenAI is rolling out controls that allow parents to link accounts with their teens’, while Meta now blocks its bots from discussing sensitive topics such as self-harm or suicide with teenagers—redirecting them instead to expert resources—and maintains parental controls for teen accounts.