Section 230 · Other

IN RE: SOCIAL MEDIA ADOLESCENT ADDICTION/PERSONAL INJURY PRODUCTS LIABILITY LITIGATION

🏛 U.S. District Court for the Northern District of California · 📅 2022-10-06

Issue

In *In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation*, defendants Meta and YouTube argue that Section 230 of the Communications Decency Act immunizes virtually every platform feature plaintiffs allege caused harm to adolescents (recommendation algorithms, autoplay, infinite scroll, and engagement-maximizing notifications) on the theory that these constitute protected "publishing" decisions over third-party content rather than independent product design choices. Their proposed jury instruction also asserts that failure-to-warn claims are equally immunized, treating a platform's silence about its own design-generated harms as equivalent to an editorial decision about user-generated content, a position no circuit has cleanly endorsed.

What Happened

At the federal MDL pretrial stage in the Northern District of California, the Plaintiffs' Steering Committee filed a Joint Case Management Statement for the April 2026 Case Management Conference, attaching as Exhibit 2 a redline of Defendants' Revised Proposed Jury Instruction #18, titled "Protection for Publishing and Expressive Activity." The instruction is defendants' revised attempt to secure court adoption of a broad Section 230 immunity charge after a prior version was rejected at the March 18, 2026 Pretrial Conference (ECF No. 2837). The proposed instruction enumerates sixteen categories of protected platform conduct and frames the only viable path to liability as a compound standard: jurors must simultaneously find undefined "Non-Protected Conduct," negligence, and substantial-factor causation. A case-specific passage further instructs jurors to consider evidence of third-party bullying solely as an explanation for a plaintiff's continued platform use, preemptively limiting how causation arguments based on harmful-content exposure may be presented. No case citations appear within the instruction text; the Section 230 framework is invoked implicitly through the instruction's definitional structure.

Why It Matters

The platforms are asking the court to tell jurors, as a settled legal matter, that nearly everything plaintiffs challenge (recommendation algorithms, autoplay, infinite scroll, engagement notifications) is legally protected activity that cannot give rise to liability, effectively resolving the most contested open question in Section 230 law inside a jury trial rather than through a dispositive motion. The Supreme Court's 2023 decision in *Gonzalez v. Google* deliberately left unresolved whether algorithmic amplification constitutes "publishing," so whatever the court decides about this instruction could become the most significant judicial statement on that question to emerge from this MDL. The court's rejection of an earlier version signals meaningful skepticism. If the court issues a written ruling explaining why it again rejects or substantially rewrites the instruction, that order, rather than the instruction itself, may carry the greatest precedential weight for how future social media injury plaintiffs are permitted to frame their claims.
