Instagram flagged explicit messages to minors in 2018. Image-blurring arrived six years later
Meta took six years to blur explicit images on Instagram, even though internal emails show executives were aware in 2018 that minors were receiving them, according to newly unsealed court documents.
In a deposition given last year, Adam Mosseri (now the head of Instagram) discusses an email thread with Guy Rosen, Meta’s VP and chief information security officer at the time. Rosen explained in the thread that adults could find and message minors on the platform. The messages could contain what Rosen called:
“tier 2 sexual harassment, like dudes sending dick pics to everyone”
up to…
“tier 1 cases where they end up doing horrible damage.”
The tool Meta now uses to address the problem is a client-side classifier that automatically blurs explicit images sent to teens in direct messages. But it wasn’t rolled out until roughly six years after that email exchange, in September 2024.
The deposition, filed on February 20, 2026, in MDL No. 3047 (Case No. 4:22-md-03047-YGR), was unsealed last week. The multidistrict litigation, heard in Northern California, involves hundreds of families who allege that platforms including Instagram were designed to maximize screen time at the expense of young users’ well-being. The filing is available through the court’s PACER docket.
Internal records reveal teen safety concerns at Meta
The filing also surfaces internal survey data that Instagram had kept confidential. Nearly one in five respondents aged 13 to 15 reported encountering unwanted nudity or sexual imagery on the platform. Another 8.4% said they had seen someone harm themselves, or threaten to, on Instagram within the previous week.
Instagram’s own Transparency Center didn’t disclose this at the time. Its child-endangerment section stated simply that the company was still working on the numbers. Mosseri also confirmed he had never publicly shared an internal estimate of around 200,000 daily child users experiencing inappropriate interactions, a figure referenced during questioning.
His defence, and Meta’s, rests on the claim that the company was not idle during those six years. Mosseri told the court that other protections were introduced in the interim, including restrictions on adults messaging teens they are not connected to, and systems designed to flag potentially risky accounts.
He pushed back on the idea that parents should have been explicitly warned about unmonitored direct messages, arguing that the risk exists on many messaging platforms. Meta spokesperson Liza Crenshaw pointed to Teen Accounts and parental controls, saying the company has been working on the problem for years.
Other allegations against Meta
The nudity filter is not the only safety measure under scrutiny. Court filings in related proceedings allege Meta explored making teen accounts private by default as early as 2019, then dropped the idea over concerns it would damage engagement metrics. That default-private switch did not arrive until September 2024.
Whistleblower Arturo Béjar, a former Meta engineering director, told the US Senate in 2023 that he had raised teen safety concerns directly with Mosseri and other executives. He acknowledged that the company researched these harms extensively, but questioned whether it acted with sufficient urgency.
An independent audit published in September 2025 found that of 47 teen safety features Instagram publicly promoted, fewer than one in five functioned as described.
Mosseri’s 2023 performance self-review, entered as an exhibit in the case, celebrated revenue at all-time highs and boasted about delivering results despite cutting his team by 13%. Teen well-being did not appear as a criterion in that review. He explained that well-being sat with a centralized Meta team, outside his direct remit.
In a case asking whether Instagram’s leadership prioritised growth over safety, that distinction may not land the way he hopes.