AI Liability

Younge v. Altman

🏛 District Court, N.D. California · 1 filing
2026-04-29 Complaint AI Liability

Complaint

Issue: In *Younge v. Altman*, plaintiffs Lance Younge and Jennifer Geary allege that OpenAI and its CEO Samuel Altman owed a duty to warn law enforcement once their internal safety review identified a user who subsequently carried out a February 2026 mass shooting in Tumbler Ridge, British Columbia, and that the decision to remain silent, allegedly driven by IPO-related commercial considerations, constitutes actionable negligence. The case also asks whether family members who perceived the attack in real time by telephone, without physical presence at the scene, can satisfy California's bystander requirements for negligent infliction of emotional distress.

Plaintiffs filed this complaint on April 29, 2026, in the Northern District of California; the case is at the pre-answer stage, with no responsive pleading yet filed. The complaint asserts a single cause of action, negligent infliction of emotional distress, against OpenAI's CEO and three OpenAI entities. Plaintiffs allege that OpenAI's safety team reviewed the shooter's account, identified a credible threat, and then made a deliberate decision not to contact the RCMP, while simultaneously publishing re-registration instructions that allegedly allowed the flagged user to return to the platform. The complaint layers four distinct negligence theories: a *Tarasoff*-style duty to warn, negligent re-entrustment, knowing design degradation for engagement purposes, and negligent undertaking with displacement of independent law-enforcement action. Each theory is structured to avoid characterizing OpenAI's conduct as a publishing decision protected by Section 230. Plaintiffs seek compensatory and punitive damages, along with attorneys' fees under California Code of Civil Procedure § 1021.5.

This complaint is one of the most structurally deliberate attempts to date to construct a Section 230-resistant AI liability theory. The architecture it proposes, stacking voluntary-undertaking, own-conduct, and design-defect framings to route around publisher immunity, is likely to be tested and refined through motion practice in ways that could influence how courts analyze AI developer duties more broadly. The negligent-undertaking-with-displacement theory is the complaint's most doctrinally plausible argument: if a platform voluntarily assumes a safety-review function and then makes an affirmative decision not to act on what that review reveals, a court could find that the claim rests on the platform's own conduct rather than on any publishing decision. The *Tarasoff* extension to a commercial AI platform and the telephone-based bystander NIED theory are the complaint's most exposed flanks and will face serious scrutiny at the 12(b)(6) stage, particularly given the absence of supporting authority for either. How the court addresses Section 230 preemption, a defense the complaint anticipates in its structure but never confronts directly, may prove the pivotal early question in this litigation.