Instagram Kids: Devastating Report on Safety Failures
October 20, 2025
Instagram promised parents safety and pledged to protect children from harmful content. The latest report, however, reveals something completely different: the Instagram algorithm not only fails to shield young users but actively promotes content that threatens their mental health. The scale of the problem is far larger than Meta is willing to admit, raising serious concerns about children’s safety on Instagram.
The report, prepared by Fairplay, the Molly Rose Foundation, and experts from Cybersecurity for Democracy, leaves no room for doubt. Researchers tested 47 of the 53 safety features Meta announced for young Instagram users in 2024. The outcome? They gave 64 percent of the tools a “red” rating, meaning Meta has either removed them or they are entirely ineffective.
“Meta has repeatedly proven that it simply cannot be trusted,” write Ian Russell and Maurine Molak in the report. These parents, whose teenage children took their own lives after encountering toxic content on Instagram, state clearly that these were not isolated tragedies; uncontrolled algorithms caused them. Almost ten years have passed since their children’s deaths, and the problem has not gone away: algorithms still feed content about self-harm, eating disorders, and sexualization to young users.
The researchers created dozens of test profiles designed to mimic real teenagers. These accounts behaved like actual young users: liking, browsing, and following content. The result? The Instagram algorithm exceeded even the worst predictions. Despite activated safeguards, the system promoted violence, sexually suggestive content, and material encouraging self-harm.
The researchers also came across a truly alarming phenomenon: the Instagram algorithm rewarded accounts appearing to belong to very young girls, pushing material that drew dangerous attention from adult users. Videos featuring seemingly innocent questions like “Am I pretty?” quickly racked up over a million views and thousands of comments, many of them vulgar. Ordinary videos from the same profiles achieved only a fraction of those numbers. Clearly, the risks Instagram poses to children are being amplified, not mitigated.
The list of broken features is long. The “Take a Break” tool, which was supposed to remind teenagers to pause, has apparently been removed—even though Meta never announced its removal. Similarly, “Hidden Words,” designed to filter offensive comments, failed to stop aggressive messages. Multi-block, intended to prevent harassment by new accounts from the same person, simply did not function.
When the researchers paired the teenagers’ accounts with parental accounts, they discovered another major issue: parents did not receive information about the content their children were actually viewing. The supervision panel displayed only the topics of interest the teenager had selected, without revealing that the algorithm was recommending inappropriate material to the child.
“We cannot waste any more time or allow more children to suffer due to Meta’s self-regulation,” the report’s authors conclude. In the USA, they call for the enactment of the Kids Online Safety Act, and in the UK, for the strengthening of the Online Safety Act.
Furthermore, the researchers call for independent audits of safety features, similar to those applied to safety systems in cars. Will the technology industry finally take the protection of children on Instagram seriously? Or will it, as it has done so far, rely on yet more empty announcements to silence criticism? Ultimately, real change demands full transparency and robust external oversight of children’s safety on Instagram.
Read this article in Polish: Instagram szkodzi dzieciom. Wyniki raportu są druzgoczące