AI Liability

Doe 1 v. X.AI Corp.

🏛 District Court, N.D. California · 2 filings
2026-03-16 Motion AI Liability

MOTION to Relate Case — Attachment 3

Issue: Whether two putative class actions against xAI Corp. and xAI LLC (one brought by adult plaintiffs and one by minor plaintiffs), each alleging that the Grok generative AI system produced nonconsensual sexualized images through a defectively designed product, should be designated as related under N.D. Cal. Civil Local Rule 3-12 and assigned to a single judge.

On January 23, 2026, an adult plaintiff filed a putative class action (No. 5:26-cv-00772) against xAI Corp. and xAI LLC; the case was assigned to Judge P. Casey Pitts. On March 16, 2026, a second putative class action was filed on behalf of minor plaintiffs (No. 5:26-cv-02246) against the same defendants but had not yet been assigned to a district judge. Plaintiffs in the minor action filed this administrative motion seeking to relate the two cases under Local Rule 3-12, arguing that both actions involve the same defendants, the same allegedly defective Grok AI system, the same failure to implement industry-standard guardrails, and overlapping causes of action: strict liability for design defect; negligence; intentional and negligent infliction of emotional distress; violation of California's Unfair Competition Law; violation of California's statutory right of publicity; and public nuisance.

This motion signals the emergence of parallel, coordinated class action litigation against a generative AI developer premised on product liability and tort theories for AI-generated nonconsensual intimate imagery. If the cases are related, a single court would be positioned to develop unified precedent on whether strict liability design-defect and negligence frameworks apply to generative AI outputs.

2026-03-16 Complaint AI Liability

Complaint

Issue: Whether xAI Corp. and xAI LLC are civilly liable under federal child sexual abuse material statutes (18 U.S.C. §§ 2255, 2252A), the Trafficking Victims Protection Act (18 U.S.C. § 1595), California's right of publicity and unfair competition laws, and common-law strict liability and negligence theories for designing and deploying the Grok generative AI image and video model without industry-standard safety guardrails, thereby enabling the production, possession, and distribution of AI-generated CSAM depicting real minor plaintiffs.

Plaintiffs Jane Doe 1, Jane Doe 2, and Jane Doe 3 (two of whom are minors proceeding through guardians) filed a putative class action complaint on March 16, 2026, in the Northern District of California against xAI Corp. and xAI LLC. They allege that defendants deliberately omitted industry-standard safety controls (including training-data filtering, pre- and post-inference filters, hash matching, and system-prompt restrictions) from the Grok generative AI image and video model, and further monetized the technology by licensing it to third-party developers abroad who sold subscriptions enabling CSAM production. Plaintiffs allege that xAI knowingly released and profited from Grok's capacity to generate photorealistic sexualized deepfakes of real individuals, including minors, and that defendants' systems produced, stored, and distributed AI-generated CSAM depicting each named plaintiff. The complaint asserts thirteen counts: three claims under Masha's Law for production, distribution, and possession of child pornography; TVPA beneficiary liability; California statutory right of publicity; UCL violations; strict liability and negligence design defect; negligent undertaking; negligence per se; NIED; IIED; and public nuisance. Plaintiffs seek damages, injunctive relief, and a jury trial on behalf of the named plaintiffs and a proposed nationwide class of similarly situated minor victims.

This complaint represents one of the first attempts to impose direct federal CSAM statutory liability on a generative AI developer as an alleged producer and distributor, rather than merely a passive platform, based on the model's own output. If accepted, that theory could establish that AI-generated content triggers the same strict civil liability framework as human-produced CSAM, and that deliberate omission of industry-standard safety guardrails constitutes an actionable design defect exposing AI developers to both tort damages and civil damages under criminal-analog federal statutes.
