Browse Cases
207 results
Stacey v. Altman
Issue: In *Stacey v. Altman*, plaintiff Mark Stacey argues that OpenAI and its CEO Samuel Altman bear tort liability — under negligence, strict products liability, wrongful death, and California's UCL — for deaths arising from a mass shooting by a user whose violent planning was allegedly sustained and validated by ChatGPT over months. The case raises the non-obvious question of whether an AI company's internal architectural choices — including removal of categorical violence-refusal protocols and addition of features that reinforce user ideation — can ground product-defect and *Tarasoff*-style duty-to-warn claims, particularly where the company's own safety team had identified and banned the user's account before functionally re-enabling access through support-channel instructions.
Why It Matters: This complaint is significant not for any ruling it produces but for how it assembles, in a single high-profile pleading, several of the most consequential open questions in AI tort law. The design-defect framing — anchored to specific, attributable architectural choices rather than to ChatGPT's conversational outputs — is a deliberate attempt to occupy the "own conduct" lane that courts have carved out from Section 230 immunity in cases like *Lemmon v. Snap*, and its viability at the pleading stage turns on whether courts will treat AI model architecture as sufficiently distinct from expressive output. The *Tarasoff* extension, while legally vulnerable if it rests solely on the unlicensed-psychotherapy predicate, carries an independent and doctrinally stronger assumption-of-duty theory grounded in OpenAI's own internal threat identification and the support-channel re-enablement sequence. If a motion to dismiss reaches the design-defect and assumption-of-duty theories, the court's analysis could set an early and influential marker on how existing tort frameworks apply to AI product liability — making future dispositive motions in this case worth close attention.
View on CourtListener →
Mwansa, Sr. v. Altman
Issue: In *Mwansa, Sr. v. Altman*, Plaintiffs Abel Mwansa, Sr. and Bwalya Chisanga argue that OpenAI possessed eight months of actual, advance knowledge that a specific user posed a credible mass-violence threat, suppressed that information to protect a pending IPO, and deployed a version of GPT-4o affirmatively designed to prioritize user engagement over safety refusals — raising the question whether an AI platform and its CEO can be held liable, under theories ranging from *Tarasoff*-style duty-to-warn to strict products liability design defect, for a mass shooting that killed minor A.M. What makes the question non-obvious is that no court has extended *Tarasoff*'s special-relationship duty to a consumer AI platform, no California appellate court has held that AI-generated conversational output constitutes a "product" subject to strict liability, and Plaintiffs seek to impose personal liability on a sitting CEO for specific launch decisions he allegedly made over his own safety team's objections.
Why It Matters: This complaint represents one of the most architecturally ambitious attempts on record to map AI-platform liability across multiple converging legal frameworks simultaneously, and the specific doctrinal moves it makes will shape motion practice well beyond this case. By anchoring the strict-liability design-defect theory to the company's own internal Model Spec — using OpenAI's words to satisfy *Barker v. Lull Engineering*'s risk-utility prong — Plaintiffs have constructed a template that future litigants can replicate whenever internal AI governance documents are obtainable in discovery. The *Tarasoff* extension theory, routed through a UCL unlicensed-therapy claim to manufacture the required special relationship, is a genuinely novel doctrinal maneuver: if any court entertains it, the implications for every AI platform that markets itself as an emotional-support or mental-health-adjacent product are substantial. The attempt to impose direct personal liability on a sitting tech CEO for specific product-launch decisions, and the effort to preempt Section 230 by characterizing GPT-4o's memory and sycophancy features as affirmative recommendation-engine choices rather than passive conduit functions, each present open questions that are forming — but have not yet been resolved — across the Ninth Circuit.
View on CourtListener →
M.G. v. Altman
Issue: In *M.G. v. Altman*, plaintiffs argue that OpenAI and its CEO Samuel Altman bear legal responsibility for a mass shooting in Tumbler Ridge, British Columbia that injured twelve-year-old M.G., on the theory that OpenAI's internal safety team identified the shooter as a credible, imminent threat before the attack and was overruled by leadership — and that the company's deliberate engineering of ChatGPT to be emotionally immersive and engagement-maximizing, at the expense of violence-interruption safeguards, constitutes both a defective product and an actionable failure to warn law enforcement under *Tarasoff v. Regents of U.C.* The case asks, at its core, whether an AI company that possesses threat-specific user data, operates its own internal threat-assessment apparatus, and has affirmatively stripped categorical violence refusals from its system can be held liable in tort — and subject to punitive damages — for the downstream violence its product allegedly facilitated.
Why It Matters: This complaint is a significant stress-test of the current frontier of platform-liability doctrine because Edelson PC has deliberately layered three distinct theories to route around § 230 immunity simultaneously: the product-design carve-out, the assumption-of-duty doctrine, and the platform's-own-conduct theory — each targeting OpenAI's first-party engineering and executive decisions rather than user-generated content. The *Tarasoff* duty-to-warn count, if it survives a motion to dismiss, would be the first appellate-track holding to consider whether a commercial AI company operating a conversational system with internal threat-assessment capabilities can be treated as standing in a special relationship with foreseeable victims — a question with cascading consequences for every company in the generative AI industry. The interaction between California's strict-liability consumer-expectations test and a system engineered to adapt dynamically to each individual user is analytically uncharted, and this case is positioned to force a Ninth Circuit answer to whether generative AI outputs constitute immunized "content" or actionable "product." The use of a CEO's public apology as a party-admission of pre-incident knowledge is an evidentiary theory that, if credited at the pleading or trial stage, would reshape how AI executives communicate after catastrophic incidents industry-wide.
View on CourtListener →
Lampert v. Altman
Issue: In *Lampert v. Altman*, Plaintiff Sarah Lampert argues that OpenAI owed a *Tarasoff*-style duty to warn law enforcement after its own review team flagged a user as a credible, imminent threat — and that when leadership overruled that recommendation to protect a pending IPO valuation, it became legally responsible for a mass shooting that killed twelve-year-old T.L. The case also asks whether GPT-4o's pre-deployment architectural choices — including sycophancy tuning, memory persistence, and the deliberate removal of categorical refusal protocols — constitute actionable design defects under California strict liability, or whether those claims collapse into § 230-protected publisher activity because conversational outputs cannot be cleanly separated from the underlying content they generate.
Why It Matters: This complaint is among the most structurally ambitious attempts yet to hold an AI company liable for real-world violence, and its doctrinal significance lies less in any single theory than in the layered architecture of its § 230 avoidance strategy: each cause of action is independently routed through "platform's own conduct" — internal overruled safety decisions, pre-deployment design choices, and post-deactivation re-entrustment — rather than through anything the Shooter said or that OpenAI published. If any one of those tracks survives a threshold § 230 motion, it would represent a meaningful expansion of AI-company liability under existing product-design doctrine as developed in *Lemmon v. Snap* and the fractured *Gonzalez* panel. The *Tarasoff* extension theory and the unlicensed-practice-of-psychology UCL prong are each without direct precedent and, if credited even partially, would open lines of duty against AI developers that no court has yet recognized. Courts and practitioners building AI liability frameworks will watch this case for how the Northern District resolves the foundational question of whether an AI system's conversational design is a separable "product feature" or is constitutionally inseparable from the third-party content it generates.
View on CourtListener →
Younge v. Altman
Issue: In *Younge v. Altman*, plaintiffs Lance Younge and Jennifer Geary argue that OpenAI and its CEO Samuel Altman owed a duty to warn law enforcement once their internal safety review identified a user who subsequently carried out a February 2026 mass shooting in Tumbler Ridge, British Columbia — and that the decision to remain silent, allegedly driven by IPO-related commercial considerations, constitutes actionable negligence. The case also asks whether family members who perceived the attack in real time by telephone, without physical presence at the scene, can satisfy California's bystander requirements for negligent infliction of emotional distress.
Why It Matters: This complaint represents one of the most structurally deliberate attempts to date to construct a Section 230-resistant AI liability theory, and the architecture it proposes — stacking voluntary-undertaking, own-conduct, and design-defect framings to route around publisher immunity — is likely to be tested and refined through motion practice in ways that could influence how courts analyze AI developer duties more broadly. The negligent-undertaking-with-displacement theory is the complaint's most doctrinally plausible argument: if a platform voluntarily assumes a safety-review function and then makes an affirmative decision not to act on what that review reveals, a court could find that claim rests on the platform's own conduct rather than any publishing decision. The *Tarasoff* extension to a commercial AI platform and the telephone-based bystander NIED theory are the complaint's most exposed flanks and will face serious scrutiny at the 12(b)(6) stage, particularly given the absence of supporting authority for either. How the court addresses Section 230 preemption — conspicuously uncontested in the complaint — may prove the pivotal early question in this litigation.
View on CourtListener →
NetChoice v. Ellison
Issue: In *NetChoice v. Ellison*, plaintiff NetChoice argues that a Minnesota law requiring social media platforms to display a state-authored mental-health warning to every user at the start of every session — with no option for users to permanently dismiss it — unconstitutionally compels private speech in violation of the First Amendment. The case turns on whether the more demanding strict-scrutiny standard applies, or whether the government can defend the mandate under the more permissive *Zauderer* framework, which permits rational-basis review for purely factual disclosures in commercial advertising contexts. The question is made legally significant because no Supreme Court ruling has definitively settled whether *Zauderer* can reach beyond its original advertising context to cover a warning displayed across a general-purpose speech platform.
Why It Matters: Minnesota's law is among the first state social-media warning-label statutes positioned to take effect following the wave of legislation enacted between 2023 and 2025, meaning a ruling here — even at the preliminary injunction stage — will carry significant persuasive weight in the dozen or more similar cases still pending in federal courts. A preliminary injunction granted by the District of Minnesota would further solidify the pattern of judicial resistance to state-mandated mental-health warnings, while a decision allowing enforcement could fracture that emerging consensus and accelerate the path to Supreme Court review. The case also gives a federal court an early opportunity to address the question *NIFLA* deliberately left open — whether *Zauderer* survives outside the commercial-advertising context at all — in a setting where the regulated entity is simultaneously a commercial service and a major forum for protected speech.
View on CourtListener →
Commonwealth v. Meta Platforms, Inc.
Issue: *Commonwealth v. Meta Platforms, Inc.* asks whether Section 230 of the Communications Decency Act bars Massachusetts consumer protection and public nuisance claims against Meta arising from Instagram's deliberate engineering of features — including infinite scroll, autoplay, intermittent variable-reward notifications, and ephemeral content — designed to exploit adolescent neurological vulnerabilities. The question is non-obvious because Meta's algorithmic and design choices are intertwined with the platform's publication of third-party content, and federal courts have divided sharply on whether claims targeting such features are shielded as inherent to a publisher's role or survive as challenges to a platform's independent engineering decisions.
Why It Matters: This ruling introduces a structurally distinct analytical framework—requiring both a dissemination element and a content element to trigger Section 230 immunity—that most federal courts have not articulated at this level of precision, and it squarely holds that addictive-design features are content-neutral as a matter of law because their alleged harm is independent of what any third party posts. By explicitly criticizing the N.D. Cal. MDL decisions and flagging the pending Ninth Circuit appeal in *California v. Meta Platforms* as presenting the same issues, the SJC openly anticipates a federal-state conflict that could fragment the national legal landscape for every state attorney general pursuing analogous claims. Significant questions remain open on remand, including Meta's dormant Commerce Clause, First Amendment, and other preemption defenses—any of which could independently limit or defeat the claims—and the opinion leaves unresolved where the line falls for features that curate or rank third-party content rather than merely delivering it through an engineered format.
View on CourtListener →
X. AI LLC v. Weiser
Why It Matters: The complaint is worth watching because it advances a theory — that a state disparate-outcome liability regime is constitutionally equivalent to commanded racial classification — that, if accepted, would create significant friction with decades of federal disparate-impact jurisprudence under Title VII, ECOA, and the Fair Housing Act, frameworks the federal government itself administers. Count Two presents the stronger and more doctrinally grounded question: whether an explicit statutory authorization for race- or sex-conscious AI adjustments can survive strict or intermediate scrutiny absent the specific evidentiary findings *Croson* and its progeny demand, and that question is likely to survive early motion practice. The case is also a leading indicator of how the federal government intends to use constitutional litigation — rather than preemption doctrine — as a tool to contest state AI regulation, a strategic choice with broad implications for the emerging field of algorithmic governance. How the district court treats the *SFFA* "zero-sum" importation in a non-admissions context may become the most consequential doctrinal development to emerge from this litigation.
View on CourtListener →
Why It Matters: This is among the first direct constitutional challenges to a state AI-regulation statute, and the court's treatment of xAI's compelled-speech theory will signal how far *303 Creative* and *Moody v. NetChoice* extend into the emerging AI regulatory space. The case puts in direct tension two competing post-*NIFLA* frameworks: the state's likely characterization of SB24-205 as conduct-based consumer protection, and xAI's characterization of algorithmic curation as protected editorial judgment — a question with implications for every AI company subject to state AI laws modeled on Colorado's. The Dormant Commerce Clause and vagueness claims, if successful, could invalidate "doing business in state" AI compliance triggers more broadly and constrain how states may delegate definitional authority to regulators in technology statutes. Colorado is not alone — similar legislation is advancing in other states — so the outcome here is likely to be watched as a template for or against constitutional challenges to the AI regulatory wave.
View on CourtListener →
Doe/Podslurp v. Department of Homeland Security
Issue: In *Doe/Podslurp v. Department of Homeland Security*, an anonymous X/Twitter user argues that a federal administrative summons issued under 19 U.S.C. § 1509 — a customs-enforcement statute — cannot lawfully be used to compel a social media platform to reveal the identity of someone who posted criticism of a federal agent involved in a high-profile shooting death. The question is non-obvious because § 1509 grants DHS broad summons authority over records related to "laws administered by CBP and ICE," which DHS may argue extends to post-9/11 immigration enforcement functions, while Movant contends the statute was never designed to reach domestic political speech. The case also asks whether the First Amendment's protection of anonymous speech, and the Supreme Court's 2023 reformulation of the true-threats doctrine in *Counterman v. Colorado*, independently bar the government from unmasking the speaker before any charge is filed.
Why It Matters: This filing raises what appears to be a first-impression challenge to DHS's use of § 1509 as an identity-unmasking tool directed at domestic political speech, a practice a 2017 OIG report suggests has occurred systematically but that no court has squarely addressed. If the court reaches the First Amendment question, it would be among the first to apply *Counterman v. Colorado*'s 2023 subjective-recklessness requirement in the context of a government-initiated administrative summons — before any criminal charge — rather than in an ongoing prosecution, a doctrinal gap of genuine significance. The case also tests whether the *Florida Star*/*Smith v. Daily Mail* line of cases, which bars civil liability for truthful publication of lawfully obtained public information, constrains government investigative authority at the summons stage, a question those decisions did not resolve. A ruling quashing the summons could meaningfully limit the government's ability to use customs-era administrative tools as first-step instruments for identifying anonymous online critics of federal officials.
View on CourtListener →
Anthropic, PBC v. United States Department of War, et al.
Issue: In *Anthropic, PBC v. United States Department of War, et al.*, the defendant-appellants argue that the Ninth Circuit should hold its interlocutory appeal in abeyance pending the D.C. Circuit's resolution of a parallel challenge to the same supply chain risk designations — raising the question of whether one circuit's expedited review of overlapping statutory questions justifies suspending an independent appellate proceeding in a sister circuit. The question is non-obvious because the two proceedings rest on distinct statutory authorities (10 U.S.C. § 3252 and 41 U.S.C. § 4713), the district court's injunction also covers social-media conduct not before the D.C. Circuit, and Anthropic has pressed constitutional claims that would survive any purely statutory ruling in the government's favor.
Why It Matters: The government is asking the Ninth Circuit to pause and let the D.C. Circuit go first — a tactically sensible request if the government anticipates a favorable ruling there that could undermine Anthropic's position in both forums. The practical stakes are asymmetric: abeyance would delay any Ninth Circuit ruling while the existing preliminary injunction remains nominally in place, but the government is simultaneously arguing in Washington that no injunction should exist at all. The motion's most contestable claim — that a favorable D.C. Circuit ruling on § 4713 would practically dissolve the § 3252 injunction — is legally underdeveloped and gives Anthropic a clear target in opposition, since the two statutes are independent grants of authority and the district court's injunction rests on additional constitutional grounds the D.C. Circuit will not reach. More broadly, the case surfaces an open and consequential question: when the same executive action is challenged simultaneously in multiple circuits under distinct legal frameworks, what weight — if any — should one circuit give to a sister circuit's expedited schedule? If the Ninth Circuit denies abeyance and the circuits diverge, pressure for en banc or Supreme Court review of the underlying designation authority would intensify quickly.
View on CourtListener →
Doe v. Perplexity AI, Inc.
Why It Matters: *Doe v. Perplexity AI* is significant because Perplexity's business model — generating direct, synthesized answer-engine responses rather than hosting third-party content — places it at the frontier of the unresolved question of whether Section 230 immunizes AI-generated output or whether the AI developer is itself the "information content provider" stripped of immunity. The case also implicates the *Garcia v. Character Technologies* question of whether AI-generated outputs constitute protected speech at the pleading stage, and may help define the duty-of-care standard for AI answer engines that represent their outputs as factually accurate.
View on CourtListener →
Why It Matters: This case sits at the intersection of all three newsletter pillars and implicates the unresolved question of whether Section 230 immunizes AI-generated search output or whether Perplexity, as the system generating the content, is itself the information content provider and thus unprotected — a direct test of Priority Tracking Areas 3, 8, and 9. Given Perplexity's model of synthesizing and presenting AI-generated answers rather than merely hosting third-party content, the case may produce significant doctrine on the ICP status of generative AI search engines and the applicability of product liability and speech-tort theories to AI answer engines.
View on CourtListener →
State of Texas v. Snap Inc.
Issue: Whether Snap may remove to federal court under the federal officer removal statute, and whether the First Amendment and Section 230 constitute colorable federal defenses against Texas DTPA and SCOPE Act claims targeting Snapchat's content ratings, safety disclosures, and parental control obligations.
Why It Matters: This case presents a significant intersection of First Amendment compelled-speech doctrine and state child-safety platform regulation, directly implicating the *Moody v. NetChoice* framework as applied to disclosure and content-rating mandates; the explicit invocation of Section 230 as a colorable federal defense to state consumer protection claims targeting platform safety representations also tracks the growing debate over whether Section 230 and First Amendment defenses can preempt state AG enforcement actions aimed at platform design and content policies.
View on CourtListener →
Chicken Soup for the Soul, LLC v. Anthropic PBC
Issue: Whether the unauthorized downloading and reproduction of copyrighted books from shadow-library repositories (including LibGen, Z-Library, Books3/The Pile, and Anna's Archive) to train and optimize commercial large language models constitutes willful copyright infringement under the Copyright Act, actionable by the copyright owner against multiple AI developers including Anthropic, Google, OpenAI, Meta, xAI, Perplexity, Apple, and NVIDIA.
Why It Matters: This complaint is notable for framing industry-wide AI training practices as a coordinated, cascading pattern of willful infringement rather than isolated conduct, and for the plaintiff's deliberate rejection of class-action treatment as a mechanism it characterizes as systematically undervaluing individual copyright claims against AI developers. If litigated to verdict, it could produce the first jury-assessed statutory damages award — potentially at the willful-infringement ceiling — against multiple major AI companies for training-data copyright claims, establishing a damages benchmark that would significantly complicate the class settlement framework currently emerging in related litigation.
View on CourtListener →
Doe 1 v. X.AI Corp.
Why It Matters: This motion signals the emergence of parallel, coordinated class action litigation against a generative AI developer premised on product liability and tort theories for AI-generated nonconsensual intimate imagery, with the consolidation effort potentially positioning a single court to develop unified precedent on whether strict liability design-defect and negligence frameworks apply to generative AI outputs.
View on CourtListener →
Why It Matters: This complaint represents one of the first attempts to impose direct federal CSAM statutory liability on a generative AI developer as an alleged producer and distributor — rather than merely a passive platform — based on the model's own output. If accepted, that theory could establish that AI-generated content triggers the same strict civil liability framework as human-produced CSAM, and that the deliberate omission of industry-standard safety guardrails constitutes an actionable design defect exposing AI developers to both tort and federal criminal-analog civil damages.
View on CourtListener →
Beltran v. Meta Platforms, Inc.
Issue: Whether Meta Platforms, Inc., Sama, and Luxottica violated the federal Wiretap Act (ECPA), California's Invasion of Privacy Act, and multiple state consumer protection statutes by capturing, transmitting, and routing to third-party human annotators the private audiovisual recordings of Meta AI Glasses users without their informed consent, while affirmatively marketing the device as "designed for privacy" and "built for your privacy."
Why It Matters: This complaint presents an early test of civil liability exposure for AI hardware developers whose training-data pipelines involve undisclosed human review of sensitive user-generated recordings, potentially establishing that wiretapping and consumer protection statutes apply to wearable AI devices that funnel private audiovisual data to offshore annotators without adequate disclosure. The case may also signal growing judicial and legislative scrutiny of the intersection between AI training data collection practices and informed-consent requirements under both federal and state privacy law.
View on CourtListener →
Netchoice, LLC v. Bonta
Issue: Whether California's Age-Appropriate Design Code Act (CAADCA), Cal. Civ. Code §§ 1798.99.28–1798.99.40, facially violates the First Amendment through its coverage definition, age estimation requirement, data use restrictions, and dark patterns prohibition, as evaluated under the *Moody v. NetChoice* standard for facial challenges.
Why It Matters: The decision reinforces that First Amendment facial challengers—including sophisticated litigants like NetChoice—bear a demanding burden under *Moody* to build a record mapping a law's full set of applications before courts can measure unconstitutional uses against the statute's legitimate sweep, effectively raising the evidentiary threshold for pre-enforcement facial injunctions against online child-safety laws. The ruling also signals that states retain meaningful room to enact children's digital privacy legislation, at least where challengers cannot demonstrate facial invalidity across a substantial majority of the law's applications.
View on CourtListener →
Amazon.com Services, LLC v. Perplexity AI, Inc.
Issue: In *Amazon.com Services, LLC v. Perplexity AI, Inc.*, the ACLU, ACLU of Northern California, and Knight First Amendment Institute argue that the Computer Fraud and Abuse Act does not reach an AI-powered browser that accesses platform data on behalf of authenticated, consenting users. The brief presses the non-obvious question of whether a platform's unilateral cease-and-desist letter can convert user-delegated access into criminal unauthorized access — and whether any CFAA construction that permits platforms to define their own liability triggers by sending demand letters would unconstitutionally chill automated journalism and public-interest research.
Why It Matters: This brief pushes the Ninth Circuit toward a significant doctrinal extension of *hiQ Labs* — moving that decision's public-data logic into the contested terrain of authenticated, user-delegated AI agent access, a question no circuit has cleanly resolved. If the court accepts the user-authorization-as-delegation framework, it would effectively insulate a broad class of AI browsing and research tools from CFAA liability so long as they operate with a user's credentials and consent. The brief's treatment of *Facebook v. Power Ventures* is the argument's most vulnerable point: that decision specifically permitted CFAA liability to attach after an individualized cease-and-desist, and Amazon's stronger theory — that Perplexity was never independently authorized in the first place — maps more naturally onto *Power Ventures* than amici acknowledge. The constitutional avoidance thread is nonetheless significant: even if the textual argument fails, a ruling that endorses the chilling-effect analysis could constrain how broadly any CFAA holding is written. The case is worth watching as an early test of how appellate courts will apply *Van Buren*'s gates-up/down framework to AI agents acting on behalf of human users.
View on CourtListener →