⚖️ Section 230 🗣️ First Amendment 🤖 AI Liability
AI LIABILITY | Product Liability — Design Defect; AI LIABILITY | Negligence

Dowey v. Siems

🏛 Superior Court of the State of Delaware (removed to U.S. District Court for the District of Delaware) · 📅 2026-03-01 · Meta Platforms, Inc. (Instagram and Facebook)

Issue

Whether Meta is liable under product liability (design defect, failure to warn) and negligence theories for the deaths of minors who were sextorted by predators whom Meta's recommendation systems allegedly connected to the victims, or whether such claims are barred by Section 230 immunity.

What Happened

Plaintiffs, the parents and estates of five minors who died by suicide after being sextorted on Instagram and Facebook, filed an amended complaint against Meta alleging strict product liability (design defect, failure to warn) and common-law negligence. The complaint alleges that Meta's algorithmic recommendation systems matched teen users to identified predators; that Meta collected personal data without informed consent and used it to facilitate these connections; that Meta failed to implement available safety features its own teams recommended; that Meta made false safety representations while internal testing showed it was "matchmaking children to adult predators"; and that Meta prioritized engagement metrics over user safety. Plaintiffs expressly frame their claims as product-design and failure-to-warn theories rather than addiction-based harms, and allege that one victim (L.M.) used Instagram for only two days before his death. The complaint is structured to avoid traditional Section 230 immunity by targeting Meta's own design choices, algorithmic systems, and failure to warn rather than third-party content publication.

Why It Matters

This case directly tests the boundaries of the design-defect carve-out to Section 230 immunity after *Moody v. NetChoice* and the Supreme Court's non-decision in *Gonzalez v. Google*. Plaintiffs invoke the emerging theory, which survived a motion to dismiss in *Garcia v. Character.AI*, that a platform's architectural choices, recommendation algorithms, and data-sharing features are the platform's own product design decisions outside Section 230's scope, particularly where the platform allegedly knew its systems were connecting minors to predators and declined to implement identified safeguards. If the court permits these claims to proceed past a motion to dismiss, the ruling would reinforce the narrowing of Section 230 immunity for algorithmic harms and signal that platforms face tort exposure for design decisions that foreseeably facilitate criminal exploitation, even when the harmful content itself is user-generated.