

NEW YORK: Numerous safety features that Meta says it has implemented to protect young users on Instagram over the years do not work well or, in some cases, do not exist, according to a report from child-safety advocacy groups that was corroborated by researchers at Northeastern University.
The study, which Meta disputed as misleading, comes amidst renewed pressure on tech companies to protect children and other vulnerable users of their social-media platforms.
Of 47 safety features tested, the groups judged only eight to be completely effective. The rest were either flawed, “no longer available or were substantially ineffective”, the report stated.
Features meant to keep young users from finding self-harm-related content by blocking search terms were easily circumvented, the researchers reported. Anti-bullying message filters also failed to activate, even when prompted with the same harassing phrases Meta had used in a press release promoting them. And a feature meant to redirect teens away from bingeing on self-harm-related content never triggered, the researchers found.
Researchers did find that some of the teen account safety features worked as advertised, such as a “quiet mode” meant to temporarily disable notifications at night and a feature requiring parents to approve changes to a child’s account settings.
Titled "Teen Accounts, Broken Promises”, the report compiled and analysed Instagram’s publicly announced updates of youth safety and well-being features going back more than a decade.
Two of the groups behind the report — the Molly Rose Foundation in the United Kingdom and Parents for Safe Online Spaces in the US — were founded by parents who allege their children died as a result of bullying and self-harm content on the social-media company’s platforms.
The findings call into question Meta’s efforts “to protect teens from the worst parts of the platform”, said Laura Edelson, a professor at Northeastern University who oversaw a review of the findings. “Using realistic testing scenarios, we can see that many of Instagram’s safety tools simply are not working”.
Meta — which on Thursday said it was expanding teen accounts to Facebook users internationally — called the findings erroneous and misleading.
"This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today”, said Meta spokesman Andy Stone.
Meta documents seen by the news agency show that as the company was promoting teen-safety features on Instagram last year, it was aware that some had significant flaws.
Safety staffers also acknowledged that a system to block search terms used by potential child predators wasn’t being updated in a timely fashion, according to internal documents and people familiar with Meta’s product development. — Reuters