Report finds major flaws in Instagram’s teen safety measures despite company assurances

Arturo Bejar, Meta employee

A new report released Thursday criticizes Meta for failing to adequately protect teens on Instagram, despite years of scrutiny from lawmakers, researchers, and parents. The report was authored by former Meta employee and whistleblower Arturo Bejar in collaboration with Cybersecurity For Democracy at New York University and Northeastern University, the Molly Rose Foundation, Fairplay, and ParentsSOS.

The report examined 47 out of 53 safety features that Meta claims are designed to safeguard teens on Instagram. According to its findings, most of these features are either no longer available or ineffective. Some tools reduced harm but had notable limitations; only eight worked as intended without issues. The focus was specifically on Instagram’s design choices rather than content moderation practices.

“This distinction is critical because social media platforms and their defenders often conflate efforts to improve platform design with censorship,” the report states. “However, assessing safety tools and calling out Meta when these tools do not work as promised, has nothing to do with free speech. Holding Meta accountable for deceiving young people and parents about how safe Instagram really is, is not a free speech issue.”

Meta disputed the findings in a statement: “This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today. Teen Accounts lead the industry because they provide automatic safety protections and straightforward parental controls,” the company said. “The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night. Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We’ll continue improving our tools, and we welcome constructive feedback — but this report is not that.”

Meta also called the report "misleading" and argued that it undermines ongoing discussions about teen safety.

The authors tested Instagram's safeguards by creating test accounts for both teens and adults, then simulating inappropriate adult-to-minor interactions. They found that while some restrictions exist, such as limits on adult strangers contacting underage users, adults could still reach minors through other features like reels or follow suggestions.

“Most significantly, when a minor experiences unwanted sexual advances or inappropriate contact, Meta’s own product design inexplicably does not include any effective way for the teen to let the company know of the unwanted advance,” according to the report.

Instagram also promotes disappearing messages to teenagers with animated incentives. The report criticizes this design as potentially dangerous: because the messages vanish, they can facilitate illicit activity while leaving affected minors with no record and little recourse.

Another tool intended to filter offensive language was found largely ineffective; test messages containing highly offensive language were delivered without warnings or blocks between teen accounts. Meta responded that this tool was only meant for message requests rather than all direct messages.

Although Meta has committed not to recommend harmful content such as posts about self-harm or eating disorders to teens, test accounts reportedly received recommendations including age-inappropriate sexual material as well as violent videos depicting accidents or injuries.

Additionally, children under 13 were found active on Instagram—some reportedly as young as six—and allegedly encouraged by algorithms toward sexualized behavior such as suggestive dancing.

New Mexico Attorney General Raúl Torrez commented: “It is unfortunate that Meta is ‘doubling down’ on its efforts to persuade parents and children that Meta’s platforms are safe—rather than making sure that its platforms are actually safe.”

The authors recommend that Meta regularly "red-team" its messaging controls, give teens easier ways to report inappropriate conduct in direct messages, publish data on teen users' experiences on Instagram, ensure that content recommended to young users remains appropriate for their age group (roughly PG-rated), and solicit feedback from minors about their exposure to sensitive content.


