AI Liability

Mwansa, Sr. v. Altman

🏛 District Court, N.D. California · 1 filing
2026-04-29 · Complaint · AI Liability

Complaint

Issue: In *Mwansa, Sr. v. Altman*, Plaintiffs Abel Mwansa, Sr. and Bwalya Chisanga allege that OpenAI possessed eight months of actual, advance knowledge that a specific user posed a credible mass-violence threat, suppressed that information to protect a pending IPO, and deployed a version of GPT-4o affirmatively designed to prioritize user engagement over safety refusals. The complaint thus asks whether an AI platform and its CEO can be held liable for a mass shooting that killed minor A.M., under theories ranging from a *Tarasoff*-style duty to warn to strict products liability for design defect. The question is non-obvious for three reasons: no court has extended *Tarasoff*'s special-relationship duty to a consumer AI platform; no California appellate court has held that AI-generated conversational output constitutes a "product" subject to strict liability; and Plaintiffs seek to impose personal liability on a sitting CEO for specific launch decisions he allegedly made over his own safety team's objections.

Filed by Plaintiffs on April 29, 2026 in the Northern District of California, this 40-page complaint is the case's originating pleading; no responsive pleading is yet on record. Plaintiffs bring eleven causes of action, including negligence, strict products liability, wrongful death, a survival action, negligent entrustment, assumption of duty, aiding and abetting, and a claim under California's Unfair Competition Law (UCL), against Samuel Altman individually and multiple OpenAI entities. The complaint alleges that OpenAI's own Model Spec coded warm responses to statements like "I want to shoot someone" as *Compliant* and categorized refusals as a product defect, and that Altman compressed GPT-4o's safety-testing window to one week in order to beat a competitor's launch by a single day. Plaintiffs further allege that OpenAI's Help Center affirmatively directed suspended users, including the Shooter, to re-register, which they frame as negligent re-entrustment of a dangerous instrumentality to a known dangerous user. Finally, they characterize Altman's post-shooting public apology, "I am deeply sorry we did not alert law enforcement," as a party admission of both knowledge and breach.

This complaint is one of the most architecturally ambitious attempts on record to map AI-platform liability across multiple converging legal frameworks, and the specific doctrinal moves it makes will shape motion practice well beyond this case. By anchoring the strict-liability design-defect theory to the company's own internal Model Spec, using OpenAI's words to satisfy the risk-utility prong of *Barker v. Lull Engineering*, Plaintiffs have constructed a template that future litigants can replicate whenever internal AI governance documents are obtainable in discovery. The *Tarasoff* extension theory, routed through a UCL unlicensed-therapy claim to manufacture the required special relationship, is a genuinely novel doctrinal maneuver: if any court entertains it, the implications are substantial for every AI platform that markets itself as an emotional-support or mental-health-adjacent product. The attempt to impose direct personal liability on a sitting tech CEO for specific product-launch decisions, and the effort to plead around Section 230 by characterizing GPT-4o's memory and sycophancy features as affirmative recommendation-engine choices rather than passive conduit functions, each present open questions that are percolating, but have not yet been resolved, in the Ninth Circuit.