Meta begins automatically restricting teen users to more ‘age appropriate’ content
- Nishadil
- January 10, 2024
Meta announced plans to implement new privacy safeguards specifically aimed at better shielding teens and minors from online content related to graphic violence, eating disorders, and self-harm. The new policy update for both Instagram and Facebook, made “in line with expert guidance,” begins rolling out today and will be “fully in place… in the coming months,” according to the tech company.
[Related: Social media drama can hit teens hard at different ages.] All existing teen accounts will be automatically enrolled in the new protections, categorized as “Sensitive Content Control” on Instagram and “Reduce” on Facebook, and the same settings will be applied going forward to any newly created accounts belonging to underage users.
Accounts belonging to users under 18 will be unable to opt out of the content restrictions. Teens will also soon begin receiving periodic notification prompts recommending additional privacy settings. Enabling these recommendations via a single opt-in toggle will automatically limit who can repost a minor’s content, as well as restrict who is able to tag or mention them in their own posts.
“While we allow people to share content discussing their own struggles with suicide, self-harm and eating disorders, our policy is not to recommend this content and we have been focused on ways to make it harder to find,” Meta explained in Tuesday’s announcement. Search results related to eating disorders, self-harm, and suicide will now be hidden for teens, with “expert resources” offered in their place.
A screenshot provided by Meta in its newsroom post, for example, shows options to contact a helpline, message a friend, or “see suggestions from professionals outside of Meta.” [Related: Default end-to-end encryption is finally coming to Messenger and Facebook.] Users currently must be at least 13 years old to sign up for Facebook and Instagram.
In a 2021 explainer, the company stated that it relies on a number of age verification methods, including AI analysis and secure video selfie verification partnerships. Meta’s expanded content moderation policies arrive almost exactly one year after Seattle’s public school district filed a first-of-its-kind lawsuit against major social media companies including Meta, Google, TikTok, ByteDance, and Snap.
School officials argued at the time that such platforms put profitability over their students’ mental well-being by fostering unhealthy online environments and addictive usage habits. As Engadget noted on Tuesday, 41 states, including Arizona, California, Colorado, Connecticut, and Delaware, filed a similar joint complaint against Meta in October 2023.
“Meta has been harming our children and teens, cultivating addiction to boost corporate profits,” California Attorney General Rob Bonta said at the time.