M.G. v. Altman
Issue
In *M.G. v. Altman*, plaintiffs argue that OpenAI and its CEO Samuel Altman bear legal responsibility for a mass shooting in Tumbler Ridge, British Columbia that injured twelve-year-old M.G. Their theory is twofold: that OpenAI's internal safety team identified the shooter as a credible, imminent threat before the attack and was overruled by leadership, and that the company's deliberate engineering of ChatGPT to be emotionally immersive and engagement-maximizing, at the expense of violence-interruption safeguards, constitutes both a defective product and an actionable failure to warn law enforcement under *Tarasoff v. Regents of U.C.* At its core, the case asks whether an AI company that possesses threat-specific user data, operates its own internal threat-assessment apparatus, and has affirmatively stripped categorical violence refusals from its system can be held liable in tort, and exposed to punitive damages, for the downstream violence its product allegedly facilitated.
What Happened
On April 29, 2026, plaintiffs M.G., a minor suing through next friend Cia Edmonds, and Cia Edmonds individually, represented by Edelson PC, filed this nine-count complaint in the Northern District of California against Samuel Altman, OpenAI Foundation, OpenAI Group PBC, and OpenAI OpCo, LLC. The complaint is the initiating pleading in a newly opened case; no responsive pleadings or court orders yet exist. Plaintiffs allege that OpenAI's automated systems flagged the shooter's account in June 2025, that internal safety reviewers identified a credible and imminent threat and urged RCMP notification, and that leadership, including Altman personally, overruled them, a decision Altman subsequently acknowledged in a public apology letter dated April 24, 2026. The nine counts include negligence, strict product liability, negligent undertaking, negligent re-entrustment, aiding and abetting a mass shooting, and a claim under California's Unfair Competition Law; plaintiffs seek compensatory and punitive damages, injunctive relief, and attorneys' fees. Plaintiffs plead Altman's apology letter as a party-opponent admission of pre-incident actual knowledge, aimed at satisfying the punitive-damages threshold of malice, oppression, or fraud under Cal. Civ. Code § 3294.
Why It Matters
This complaint is a significant stress test of the current frontier of platform-liability doctrine: Edelson PC has deliberately layered three distinct theories for routing around § 230 immunity (the product-design carve-out, the assumption-of-duty doctrine, and the platform's-own-conduct theory), each targeting OpenAI's first-party engineering and executive decisions rather than user-generated content. If the *Tarasoff* duty-to-warn count survives a motion to dismiss, it would yield the first appellate-track holding on whether a commercial AI company operating a conversational system with internal threat-assessment capabilities stands in a special relationship with foreseeable victims, a question with cascading consequences for every company in the generative AI industry. The interaction between California's strict-liability consumer-expectations test and a system engineered to adapt dynamically to each individual user is analytically uncharted, and this case is positioned to force a Ninth Circuit answer to whether generative AI outputs constitute immunized "content" or an actionable "product." Finally, the use of a CEO's public apology as a party-opponent admission of pre-incident knowledge is an evidentiary theory that, if credited at the pleading or trial stage, would reshape how AI executives communicate after catastrophic incidents industry-wide.
Related Filings
Other proceedings in the same litigation tracked by this monitor.