
Steps taken by Meta to protect minors on Instagram and Facebook

Meta, the parent company of Instagram and Facebook, has taken several steps in recent years to protect children, driven largely by Instagram's popularity among teenagers. In September 2024, Meta launched Instagram Teen Accounts, which automatically apply the platform's strictest safety settings to users under 18. Teen Accounts limit direct messages (DMs) to people the user follows or is followed by, restrict unwanted contact, and reduce exposure to sensitive content such as self-harm or eating-disorder material. Users under 13 need a parent or guardian's permission to create an account, and teens are prompted to set up these protections during sign-up.

Building on this, in July 2025 Meta expanded DM safety features, surfacing more context about who is messaging a teen (for example, mutual connections) and flagging potential scammers and suspicious accounts. Earlier, in April 2024, Meta began testing on-device nudity detection in DMs, which blurs intimate images and warns users before they send such material, aiming to curb sextortion and image-based abuse.

In October 2025, Meta introduced "PG-13" content guidelines for teen accounts, aligning defaults with film ratings so that posts featuring harsh language, violence, or risky behavior are filtered, and preventing teens from following accounts that share such content. A stricter "limited content" mode lets parents impose tighter restrictions. Meta also publishes quarterly Community Standards Enforcement Reports detailing proactive detection and removal of child exploitation material; the April–June 2025 report counted millions of violating posts removed. On Facebook, similar tools such as the parental supervision dashboard and age-based content restrictions mirror these efforts, though the most targeted updates have focused on teens on Instagram.


Steps taken by Snap to protect minors on Snapchat

Snap Inc., the company behind Snapchat, has emphasized default privacy and parental oversight for teens aged 13–17. Teen accounts are private by default, meaning only accepted friends can contact them or see their Snaps and Stories, and additional safeguards limit interactions with non-friends. The Family Center, expanded in 2024, lets parents see who their teens are friends with and whom they have messaged recently, without reading message content. Snapchat also supports two-factor authentication (2FA) for additional account protection and applies content filters to limit sensitive material in Stories and Spotlight, its short-video feature.

In August 2025, Snap recalibrated its systems for detecting and removing child sexual abuse material, combining user reports with AI-powered proactive scanning, as described in its transparency report covering July–December 2024. In February 2025, Snap partnered with the U.S. Department of Homeland Security's "Know2Protect" campaign to launch in-app educational resources that teach teens about online risks such as grooming through interactive AR experiences. Snap's 2024 safety updates also cited research finding that roughly 80% of Generation Z respondents had encountered online harm, underscoring the need for stricter age verification at sign-up.


Evaluation of effectiveness

Despite these measures, both companies have faced strong criticism, with independent reports and data suggesting limited impact on major risks such as grooming, mental health harms, and exposure to harmful material. A September 2025 report by the nonprofit Fairplay and academic researchers tested 47 Instagram safety features and found that 30 were substantially ineffective or no longer available, including tools meant to block self-harm content; testers could still reach suicide-related posts from teen accounts. Only eight features were fully effective. The report calls Meta's claim that Teen Accounts lead the industry in automated safety "misleading," pointing to design choices that deliberately prioritize engagement over safety. A Washington Post investigation in September 2025 reported that Meta had suppressed internal research on child safety risks in its VR products, and critics have dismissed the October 2025 PG-13 updates as largely cosmetic.

Snap's efforts show similar systemic problems. An April 2025 analysis on the After Babel Substack described Snapchat's design — disappearing messages and quick friend-adding features — as enabling harm on an "industrial scale," including grooming and "capping" (webcam-based sexual exploitation), affecting millions of children. Data from the UK charity NSPCC in October 2024 linked Snapchat to 1,824 recorded grooming cases, nearly half of all cases where a platform was identified and the highest of any service. Ongoing lawsuits accuse Snap of fostering addiction and contributing to the adolescent mental health crisis. While transparency reports document active removals (including thousands of exploitation-related accounts in late 2024), critics argue these reactive measures do not address root causes such as how easily strangers can connect with minors. Parental tools like Family Center are praised as useful but criticized for lacking real-time monitoring, leaving them insufficient against fast-moving risks.

Overall, both companies have invested in AI detection and parental controls, but their effectiveness is undermined by inconsistent enforcement, engagement-driven algorithms, and a lack of independent verification. Progress is incremental — Meta's removal rates have risen, for example — yet harm persists, with 2024–2025 data pointing to growing risks of adolescent exploitation.


Suggestions to secure social media for children and adolescents

To improve safety meaningfully, companies, regulators, and families must cooperate on evidence-based changes. Experts including the American Academy of Pediatrics (AAP), the U.S. Surgeon General, and the NTIA have offered recommendations along the following lines:

1. Mandate age-appropriate design and verification:
Platforms should enforce strict age limits with government-supported ID checks for children under 16 and disable addictive features such as infinite scrolling and push notifications by default. Direct messaging with strangers should be fully restricted, and AI should proactively flag adult-to-teen contact.

2. Improve parental and educational tools:
Introduce real-time alerts for risky content or contacts, along with in-app education modules (similar to Snap's DHS partnership, but mandatory and interactive), and expand Family Center-style features. The AAP also recommends co-viewing sessions and clinician-led guidance for families.

3. Prioritize mental health and content moderation:
Reduce exposure to harmful material (e.g., body image pressure) and label posts with “health and safety” warnings. Following the Fairplay model, independent audits of safety tools should be conducted annually and publicly, with penalties for non-compliance.

4. Regulatory and ecosystem-wide support:
Governments should create laws like the UK’s Online Safety Act, which holds platforms accountable for social media harm and funds youth mental health resources. Encourage positive alternatives such as controlled community spaces and digital literacy training for teachers.

Implementing these measures can shift the ecosystem from reactive to preventive, but success depends on transparency and accountability among all stakeholders. Until then, parents should prioritize open discussions and establish boundaries instead of depending solely on app tools.


