Meta has announced new parental controls for interactions between teenagers and artificial intelligence chatbots on its platforms. Starting early next year, parents will have the option to disable one-on-one chats with AI characters for their children.
However, Meta’s main AI assistant cannot be turned off by parents. The company said the assistant “will remain available to offer helpful information and educational opportunities, with default, age-appropriate protections in place to help keep teens safe.”
Parents who do not wish to disable AI chats entirely will instead be able to block specific chatbots. Meta also stated that parents will receive “insights” into the topics their children discuss with AI characters, though they will not have access to full transcripts of those conversations.
These changes follow ongoing criticism directed at Meta regarding potential harms caused by its platforms to young users. There has been increasing scrutiny over how AI chatbots interact with children, including lawsuits alleging that such interactions have contributed to self-harm among minors.
Despite these concerns, teen use of the technology is widespread: a recent study from Common Sense Media found that more than 70% of teenagers have used AI companions, and half use them regularly.
Earlier this week, Meta also announced new restrictions for teen accounts on Instagram. Teen users will now only see content rated PG-13 by default and cannot change these settings without parental approval. This restriction means that teens will encounter photos and videos similar in appropriateness to what is allowed in a PG-13 movie—excluding depictions of sex, drugs or dangerous stunts. The company said these PG-13 standards would apply both to general content and AI chats on Instagram.
Children’s online advocacy groups expressed skepticism about the effectiveness of these measures.