Meta, the parent company of Instagram and Facebook, has launched several child-safety initiatives in recent years, driven largely by Instagram's popularity among teenagers. In September 2024, Meta introduced Instagram Teen Accounts, which automatically apply the platform's strictest safety settings to users under 18. These accounts limit Direct Messages (DMs) to people the teen follows or is already connected to, restrict unwanted contact, and reduce exposure to sensitive content such as posts about self-harm or eating disorders. The protections are applied automatically during sign-up, and teens under 16 need a parent's or guardian's permission to loosen them.
Building on this, in July 2025, Meta expanded DM safety features, providing more context about message senders (e.g., mutual connections) and improving detection of potential scammers and suspicious accounts. Earlier, in April 2024, Meta had begun testing on-device nudity detection in DMs, which warns users before they send intimate images, in order to deter the misuse and abuse of such material.
In October 2025, Meta introduced "PG-13" content guidelines for teen accounts, aligning recommendations with film ratings so that posts featuring strong language, violence, or risky behavior are filtered out; teens are also prevented from following accounts that regularly post such content. A stricter "limited content" mode lets parents impose tighter restrictions. Meta also publishes quarterly Community Standards enforcement reports detailing the proactive detection and removal of child exploitation material; in the April–June 2025 report, it reported removing millions of violating posts. On Facebook, similar tools such as the parental supervision dashboard and age-based content restrictions mirror these efforts, although the most targeted updates have focused on teens on Instagram.
Steps taken by Snap to protect minors on Snapchat
Snap Inc., the company behind Snapchat, has emphasized default privacy and parental oversight for teens aged 13–17. Teen accounts are private by default, meaning only accepted friends can contact them or see their Snaps and Stories, and additional safeguards limit interactions with non-friends. Family Center, which Snap expanded in 2024, lets parents see who their teens are friends with, invite their teens to join supervision, and view whom they have messaged recently without reading the messages themselves. Snapchat also supports two-factor authentication (2FA) for added account protection and applies content filters to limit sensitive material in Stories and Spotlight (its short-video feed).
In August 2025, Snap recalibrated its detection systems for identifying and removing child sexual abuse material, combining user reports with AI-powered proactive scanning, as described in its July–December 2024 transparency report. In February 2025, Snap also partnered with the U.S. Department of Homeland Security's Know2Protect campaign to launch in-app educational resources, including interactive AR experiences, that teach teens about online risks such as grooming. Snap's 2024 safety updates likewise cited findings that 80% of Generation Z had encountered online harm, a statistic used in support of stricter age verification at sign-up.
Evaluation of effectiveness
Despite these measures, both companies have faced strong criticism, and independent reports suggest limited impact on major risks such as grooming, mental health harms, and exposure to harmful material. A September 2025 report by the nonprofit Fairplay and affiliated researchers tested 47 Instagram safety features and found that 30 were largely ineffective or no longer available, including tools meant to block self-harm material; testers could still reach suicide-related posts from teen accounts. Only eight features were judged fully effective. Meta has been accused of making "misleading" claims that Teen Accounts lead the industry in automated protections, while the report highlights deliberately designed options that prioritize engagement over safety. A September 2025 Washington Post investigation also reported that Meta suppressed internal research on child-safety risks in its VR products, and skeptics have dismissed the October 2025 PG-13 updates as hollow gestures.
Snap's efforts show similar systemic problems. An April 2025 analysis on the After Babel Substack described Snapchat's design, with its disappearing messages and rapid friend-adding features, as causing harm on an "industrial scale," including grooming and "capping" (webcam exploitation), affecting millions of children. Data released by the UK charity NSPCC in October 2024 showed that of 1,824 recorded online grooming cases in which the platform was identified, almost half involved Snapchat, the highest share of any service. Ongoing lawsuits accuse Snap of fostering addiction and contributing to the adolescent mental health crisis. While transparency reports show active removals (such as thousands of exploitation-related accounts in late 2024), critics argue that these reactive measures do not address root causes such as anonymous friend connections. Parental tools like Family Center are praised as useful but criticized for lacking real-time alerts, leaving them insufficient against fast-moving risks.
Overall, both companies have invested in AI-based detection and parental controls, but effectiveness is undermined by inconsistent enforcement, engagement-driven algorithms, and a lack of independent verification. Progress is incremental (Meta's removal rates have risen, for example), yet harm persists, with 2024–2025 statistics pointing to rising risks of adolescent exploitation.
Suggestions for making social media safer for children and adolescents
To improve safety meaningfully, companies, regulators, and families will have to cooperate on evidence-based changes. Experts from the American Academy of Pediatrics (AAP), the U.S. Surgeon General, and the NTIA provide the following suggestions:
1. Make age-appropriate design and verification compulsory:
Platforms should enforce strict age limits with government-backed ID verification for users under 16, and disable addictive features such as infinite scrolling and push notifications by default. Direct messaging with strangers should be fully restricted, and automated systems should proactively flag adult-to-teen contact (see the illustrative sketch after this list).
2. Improve parental and educational tools:
Introduce real-time alerts for risky content or contacts, along with in-app education modules (similar to Snap's DHS partnership, but mandatory and interactive), and expand Family Center-style features. The AAP recommends co-viewing sessions and clinician-led guidance for families.
3. Prioritize mental health and content moderation:
Reduce exposure to harmful material (e.g., body image pressure) and label posts with “health and safety” warnings. Following the Fairplay model, independent audits of safety tools should be conducted annually and publicly, with penalties for non-compliance.
4. Regulatory and ecosystem-wide support:
Governments should enact laws like the UK's Online Safety Act that hold platforms accountable for online harms, and should fund youth mental health resources. Positive alternatives, such as moderated community spaces and digital literacy training for teachers, should also be encouraged.
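As a concrete illustration of the first suggestion, the sketch below shows one simple way a platform could flag message requests sent by adult strangers to minors. It is a minimal, hypothetical example: the account fields, the age thresholds, and the function name are assumptions made for illustration, not any platform's actual data model or policy.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: field names and thresholds are illustrative
# assumptions, not any platform's real data model or policy.

@dataclass
class Account:
    user_id: str
    age: int
    friends: set[str] = field(default_factory=set)

def should_flag_dm_request(sender: Account, recipient: Account) -> bool:
    """Flag a message request from an adult stranger to a minor for review."""
    recipient_is_minor = recipient.age < 18
    sender_is_adult = sender.age >= 18
    # "Stranger" here means no existing friendship and no mutual friends.
    not_friends = recipient.user_id not in sender.friends
    no_mutuals = not (sender.friends & recipient.friends)
    return recipient_is_minor and sender_is_adult and not_friends and no_mutuals

if __name__ == "__main__":
    teen = Account("teen_01", age=14, friends={"friend_a"})
    adult_stranger = Account("adult_99", age=35, friends={"someone_else"})
    # Prints True: this request would be routed to a review queue or blocked.
    print(should_flag_dm_request(adult_stranger, teen))
```

In practice, platforms combine many more signals (account history, prior reports, behavioral patterns) and machine-learned models, but even a rule this simple shows how a default-restrictive policy can be enforced at the point of contact.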
Implementing these measures can shift the ecosystem from reactive to preventive, but success depends on transparency and accountability among all stakeholders. Until then, parents should prioritize open discussions and establish boundaries instead of depending solely on app tools.

