Doe 1 v. X.AI Corp.
Issue
Whether xAI Corp. and xAI LLC are civilly liable under federal child sexual abuse material statutes (18 U.S.C. §§ 2255, 2252A), the Trafficking Victims Protection Act (18 U.S.C. § 1595), California's right of publicity and unfair competition laws, and common-law strict liability and negligence theories for designing and deploying the Grok generative AI image and video model without industry-standard safety guardrails, thereby enabling the production, possession, and distribution of AI-generated CSAM depicting real minor plaintiffs.
What Happened
Plaintiffs Jane Doe 1, Jane Doe 2, and Jane Doe 3 (two of whom are minors proceeding through guardians) filed a putative class action complaint on March 16, 2026, in the Northern District of California against xAI Corp. and xAI LLC. The complaint alleges that defendants deliberately omitted industry-standard safety controls (including training-data filtering, pre- and post-inference filters, hash matching, and system-prompt restrictions) from their Grok generative AI image and video model, and monetized the technology by licensing it to third-party developers abroad who sold subscriptions enabling CSAM production. Plaintiffs further allege that xAI knowingly released and profited from Grok's capacity to generate photorealistic sexualized deepfakes of real individuals, including minors, and that defendants' systems produced, stored, and distributed AI-generated CSAM depicting each named plaintiff. The complaint asserts thirteen counts: three claims under Masha's Law for production, distribution, and possession of child pornography; TVPA beneficiary liability; California statutory right of publicity; UCL violations; strict liability and negligence design defect; negligent undertaking; negligence per se; NIED; IIED; and public nuisance. Plaintiffs seek damages, injunctive relief, and a jury trial on behalf of the named plaintiffs and a proposed nationwide class of similarly situated minor victims.
Why It Matters
This complaint represents one of the first attempts to impose direct federal CSAM statutory liability on a generative AI developer as an alleged producer and distributor, rather than merely a passive platform, based on the model's own output. If accepted, that theory could establish that AI-generated content triggers the same strict civil liability framework as human-produced CSAM, and that the deliberate omission of industry-standard safety guardrails constitutes an actionable design defect exposing AI developers to both tort damages and federal criminal-analog civil damages.
Related Filings
Other proceedings in the same litigation tracked by this monitor.