New Delhi: A Los Angeles jury on Wednesday found Meta and YouTube negligent in the design of their social media platforms and ordered them to pay $6 million in damages.
In reaching its verdict — the first of its kind — the jury relied heavily on a body of internal company documents, research studies and employee communications that had accumulated over years of whistleblower disclosures and litigation discovery.
The bulk of the internal documentary evidence presented at trial related to Meta, a fact reflected in the jury’s decision to assign the company 70% of the liability. YouTube, owned by Google, was assigned the remaining 30%. Together, the documents form a detailed record of what the companies knew about the effects of their products on young users, and when.
The harm research
The earliest and most extensive body of evidence emerged in September 2021, when former Facebook employee Frances Haugen provided internal documents to the Wall Street Journal. The Journal’s “Facebook Files” series revealed that Meta had conducted research over at least three years into Instagram’s effects on young users and repeatedly found the platform harmful, particularly to teenage girls.
Among these was a 2019 internal presentation, posted to Facebook’s internal message board, which stated: “We make body image issues worse for one in three teen girls.”
A March 2020 slide presentation found that 32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse, the Journal reported.
Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the feeling to Instagram, according to another internal presentation reviewed by the Journal.
A separate finding showed 17% of teen girls said the platform worsened their eating disorders.
The research spanned multiple formats — focus groups, diary studies, online surveys and large-scale questionnaires involving tens of thousands of users, according to the Journal.
One study surveyed over 50,000 people across 10 countries, including India, and found that 48% of teenage girls said that they always or often compared their appearances to others on Instagram, according to documents later released to the United States Congress.
Facebook’s researchers also flagged the platform’s Explore page — which serves algorithmically curated content — as particularly harmful to young users.
“Aspects of Instagram exacerbate each other to create a perfect storm,” the research stated, according to the Journal. The researchers found teens described their own usage in what the documents called “an addict’s narrative” — they wished they could spend less time on it but could not stop.
When Meta chief executive Mark Zuckerberg was asked at a US congressional hearing in March 2021 whether the company had studied Instagram’s effects on children, he said he believed it had, the Journal reported. In May of that year, Instagram head Adam Mosseri said research suggested the app’s effects on teen mental health were “quite small.”
Courting young users
A separate category of internal documents, surfaced through litigation discovery in the KGM trial, detailed Meta’s strategy regarding young users. The plaintiff’s lawyers presented the jury with internal communications in which Meta executives discussed efforts to attract and retain children and teens on the platform, NPR reported.
One document stated: “If we wanna win big with teens, we must bring them in as tweens,” according to NPR and the Associated Press.
Another internal memo showed that 11-year-olds were four times as likely to keep returning to Instagram as users of competing apps, despite the platform requiring users to be at least 13 to create an account.
A 2015 internal review found four million children under 13 on Instagram, and a 2017 internal communication stated employees were “going after <13 year olds,” according to court documents cited in trial reporting.
Instagram did not require birthdates at sign-up until late 2019.
Internal documents also showed Instagram set a daily engagement target of 40 minutes per user in 2023, with plans to increase it to 46 minutes by 2026, IBTimes reported, citing court filings.
Parental controls: the company’s own findings
Among the most consequential documents to emerge at trial was an unpublished Meta research project called “Project MYST,” conducted in partnership with the University of Chicago.
The study surveyed 1,000 teens and their parents and found that common parental controls — time limits, supervision, restricted access — had little measurable impact on teens’ compulsive social media use.
The study also found that children who had experienced adverse life events such as family instability or bullying were particularly vulnerable to compulsive use.
Mosseri, testifying at trial, said he could not remember details of Project MYST beyond its name, though court documents suggested he had approved the research.
The findings were never published, and no warnings were issued to parents or teenagers based on the results of the research.
The warnings that went unheeded
In November 2023, a second Meta whistleblower, Arturo Béjar, testified before the US Senate Judiciary Subcommittee.
Béjar, a former Facebook engineering director who later consulted for Instagram, told senators he had warned Meta’s most senior executives — including Zuckerberg, then-chief operating officer Sheryl Sandberg, Mosseri and chief product officer Chris Cox — about the prevalence of harmful experiences on the platforms, CNBC and NPR reported.
Béjar cited internal survey data showing that 51% of Instagram users reported a bad or harmful experience within the previous week. Among users aged 13 to 15, one in eight reported receiving unwanted sexual advances on Instagram within the previous seven days, he testified.
Only 2% of reported harmful posts were taken down.
On October 5, 2021 — the same day Haugen testified before the Senate — Béjar emailed Zuckerberg directly with data he said validated her testimony. He never received a reply, he told senators. He described safety features subsequently introduced by Meta as “a placebo — a safety feature in name only to placate the press and regulators,” NPR reported.
Internal employee communications presented at trial offered a parallel picture. One Meta employee described Instagram as “like a drug” and said employees were “basically pushers,” according to CBS News.
Another communication stated: “We’re causing reward deficit disorder, because people are binging on Instagram so much they can’t feel the reward,” according to court documents cited in trial reporting.
Both companies have since introduced protections for young users. In September 2024, Meta launched “Teen Accounts” on Instagram, automatically placing users under 18 in private accounts with restricted messaging, content filters and notification limits; users under 16 need parental permission to change these settings. In October 2025, Meta updated the programme with content guidelines modelled on PG-13 movie ratings.
YouTube began using AI-driven age estimation in July 2025 to detect users under 18 and automatically restrict age-gated content and personalised advertising.
Evidence of how effective these measures have been has not yet been disclosed.
Both Meta and Google have said they disagree with the verdict and plan to appeal.


