Unmasking the Cracks: Why Instagram's 'Teen Safety' Features Aren't Keeping Kids Safe
- Nishadil
- September 26, 2025

Beneath the veneer of vibrant filters and endless scrolling, Instagram has long promised a safe haven for its youngest users. Yet, a groundbreaking new report shatters this illusion, revealing critical vulnerabilities in the platform's much-lauded teen safety features. The findings paint a stark picture: the very safeguards meant to shield adolescents from harm are, in many cases, easily sidestepped, leaving a gaping hole in their digital protection.
The collaborative investigation by Reset Australia and the University of Technology Sydney (UTS) delves deep into the efficacy of Instagram's measures designed to prevent unwanted interactions from adults.
Their unsettling conclusion? These features, often highlighted by Meta as robust protections, fall short of their intended purpose. The report specifically scrutinizes the restriction that prevents adults from sending Direct Messages (DMs) to teens who don't follow them back. While seemingly a strong barrier, researchers demonstrated how simple workarounds can render this protection almost meaningless.
For instance, adult users can still initiate contact through comments on public posts, or by manipulating profile settings to appear as a teen themselves.
Beyond the DM loopholes, the study also cast a harsh light on Instagram's age verification processes. The report suggests that these systems are not robust enough to reliably prevent minors from misrepresenting their age, nor do they adequately deter adults from creating fake profiles that appear to be under 18.
This critical failure means that the foundational layer of protection—ensuring users are in their correct age category—is alarmingly weak. Furthermore, while Instagram defaults new teen accounts to 'private,' the report points out that this setting doesn't extend to all forms of content or interaction, and many teens might not understand how to fully lock down their profiles.
The implications of these findings are profound.
They underscore a pervasive issue where 'safety features' can provide a false sense of security, potentially exposing young users to risks like online grooming or harassment. Reset Australia and UTS are not just sounding an alarm; they are proposing concrete solutions. Their recommendations include implementing stricter, enforceable regulations for social media platforms, making default privacy settings truly comprehensive and non-negotiable for minors, and demanding independent auditing of these safety features to ensure genuine effectiveness, not just performative gestures.
They advocate for a design approach that prioritizes child safety above all else, rather than relying on teens to navigate complex privacy settings.
While Meta, Instagram's parent company, has previously stated its commitment to teen safety, often citing tools like 'Supervision' and age-gating, the research suggests these efforts are insufficient.
The company maintains that it is constantly working on new features and refining existing ones. However, critics argue that the onus should not be on children or their parents to constantly adapt to an evolving threat landscape. Instead, the platforms themselves must be held accountable for designing environments that are inherently safe by default.
This report serves as a crucial wake-up call, urging lawmakers, parents, and tech companies alike to re-evaluate what true online safety for the next generation truly means.
Disclaimer: This article was generated in part using artificial intelligence and may contain errors or omissions. The content is provided for informational purposes only and does not constitute professional advice. We make no representations or warranties regarding its accuracy, completeness, or reliability. Readers are advised to verify the information independently before relying on it.