E.S. v. Character Technologies, Inc.
Issue: Whether Character Technologies, Inc., its individual founders, and Google/Alphabet are strictly liable under product liability theories of design defect and failure to warn—and subject to additional tort, COPPA, and state consumer-protection claims—for physical and psychological injuries suffered by a minor user of the Character.AI generative AI platform.
Why It Matters: By affirmatively pleading that C.AI's outputs are the product of Defendants' own design choices rather than third-party content, the complaint is structured to foreclose a Section 230(c)(1) immunity defense from the outset, potentially advancing the theory that AI-generated outputs are first-party "products" subject to traditional tort liability rather than publisher immunity—a framing that, if accepted, could establish a significant precedent for imposing product liability on generative AI systems and their developers.
Montoya v. Character Technologies, Inc.
Why It Matters: This case is part of a growing wave of civil actions seeking to impose product liability and tort duties directly on AI platform developers and their corporate parents for harms allegedly caused by AI-generated interactions, and may advance the question of whether AI conversational systems constitute "products" subject to design defect and failure-to-warn theories under applicable state law.
Why It Matters: This complaint represents continued development of the AI chatbot liability landscape following Garcia's watershed holding that AI-generated outputs may not receive automatic First Amendment protection and that product liability claims can survive Section 230 motions when framed around architectural design rather than third-party content. The Colorado filing extends the geographic and judicial reach of these novel theories, potentially creating additional precedent on whether LLM-generated speech constitutes a "product" subject to traditional tort frameworks and whether platforms can invoke constitutional speech defenses at the pleading stage.
Why It Matters: The complaint's explicit pleading that C.AI's harmful outputs are the product of Defendants' own programming decisions—not third-party content—appears strategically crafted to foreclose a Section 230 defense, potentially advancing the theory that AI-generated outputs are manufacturer speech subject to product liability rather than platform-hosted user content.
Encyclopaedia Britannica, Inc. v. Perplexity AI, Inc.
Issue: Whether Perplexity AI's automated answer engine, which generates verbatim or near-verbatim reproductions of copyrighted content in response to user-directed queries, constitutes "volitional conduct" by Perplexity sufficient to support direct copyright infringement liability under 17 U.S.C. § 106, as governed by the Second Circuit's *Cablevision* volitional-conduct doctrine.
Why It Matters: This motion squarely presents to a federal court the question of whether the *Cablevision* volitional-conduct doctrine—developed in the context of automated cable DVR systems—extends to shield generative AI answer engines from direct copyright infringement liability when their outputs reproduce third-party copyrighted material at a user's explicit direction. The court's ruling could establish a significant precedent governing the allocation of direct infringement liability between AI platform operators and their users across the rapidly expanding universe of RAG-based generative AI products.
Why It Matters: This exhibit directly advances the question of whether AI-generated content that is sexually explicit and directed at a minor — produced autonomously by a large language model without direct human authorship — can ground product liability or speech tort claims against the developer, a question with significant implications for how courts will categorize AI outputs (as "speech" protected or immunized, or as a defective product) and for the scope of Section 230 immunity in cases involving AI-generated rather than third-party content.
Why It Matters: This exhibit is significant because it provides direct documentary evidence that Character.AI's system both generated child-directed sexual content and possessed an internal moderation mechanism that identified the content as violative yet failed to halt generation — a factual record that could simultaneously support design defect claims (the safeguard was inadequate) and undermine any argument that harmful outputs were unforeseeable, potentially limiting the scope of any §230 defense the platform might raise.
Why It Matters: Filed as an exhibit rather than an opinion, this document supplies factual predicate for design-defect and failure-to-warn claims against an AI chatbot platform, potentially advancing the question of whether AI systems that generate harmful interactive content — and the companies that deploy them — can be held liable under traditional products liability frameworks when those systems foreseeably expose minors to sexual exploitation.
Garcia v. Character Technologies, Inc.
Why It Matters: This complaint is significant because it represents a direct attempt to apply traditional products liability frameworks—design defect and failure to warn—to a generative AI system, treating the AI chatbot as a manufactured product rather than a publisher of third-party speech, and it proactively pleads around Section 230 immunity by characterizing the AI as a first-party content generator, a theory that, if credited by the court, could substantially expand tort exposure for AI developers.
Why It Matters: This case directly tests whether traditional product liability frameworks — design defect and failure to warn — can be applied to a generative AI chatbot, potentially establishing that AI systems are "products" subject to strict liability rather than services entitled to speech-based or Section 230 protections. The complaint's explicit characterization of C.AI as an information content provider whose own-generated outputs caused harm, rather than a platform hosting third-party content, represents a deliberate litigation strategy to foreclose Section 230 immunity and could shape how courts classify AI-generated content for liability purposes.
Why It Matters: This complaint is among the first to assert traditional products liability theories—design defect and failure to warn—directly against a generative AI system and its developers, and its explicit characterization of C.AI as an information content provider rather than a neutral platform signals a deliberate litigation strategy to foreclose Section 230 immunity, which could establish a significant template for future AI tort suits if the framing survives judicial scrutiny.
View on CourtListener →Why It Matters: This report represents a significant moment in the effort to establish that products liability design defect doctrine applies to social media platform architecture — a theory that, if credited at summary judgment, would move the litigation past the threshold question of legal viability and into full merits adjudication. The feasibility argument is particularly consequential: by grounding safer alternative design in real-world commercial comparators that predated the alleged harm period, Plaintiffs aim to foreclose any claim of technological impossibility as a matter of law, converting feasibility into a jury question. Two open doctrinal questions hang over the report's reception: whether courts will apply a minor-specific risk-utility standard for engagement features that serve adult users while foreseeably harming children, and whether COPPA compliance functions as a regulatory floor or a safe harbor that displaces common law claims — neither of which has been definitively resolved in this MDL. The report's individual causation gap and its use of Estes's own platform as a feasibility comparator are predictable pressure points that Defendants will likely press in both Daubert proceedings and in reply briefing.
Why It Matters: As a pretrial exhibit list rather than a ruling or substantive motion, this document does not advance legal doctrine; however, the categories of exhibits—particularly school financial records, pre-existing behavioral data, and district technology and digital-citizenship plans—signal that Defendants intend to contest causation and damages by attributing student mental-health and behavioral issues to pre-existing institutional, socioeconomic, and pandemic-related factors rather than to platform design.
Why It Matters: This witness list signals that defendants' trial strategy will center on contesting general and specific causation through scientific experts while affirmatively presenting evidence of platform safety efforts, positioning the case as a significant test of whether product liability theories can survive against social media platforms when defendants offer robust alternative-cause and reasonable-design defenses in the school-district plaintiff context.
Why It Matters: The breadth and specificity of the exhibit list signal that plaintiffs intend to prove at trial that Meta possessed extensive internal knowledge of harms its platforms caused to adolescent users, which could be significant for establishing the knowledge and design-defect elements of product liability claims that courts in this MDL have allowed to proceed notwithstanding Section 230 immunity arguments.
Why It Matters: This document is significant because it reveals how §230 and First Amendment protections will be operationalized at the jury instruction level in the first bellwether trial of a major social media addiction MDL, effectively showing which platform design features a court has already ruled immune from tort liability; the outcome could establish a concrete, feature-by-feature framework for distinguishing actionable product design claims from immunized publishing decisions that other courts and litigants could adopt or contest in future platform liability litigation.
Why It Matters: The motion presents a significant question about whether Section 230 immunity can be invoked not only to defeat substantive liability claims but also to exclude expert damages methodologies that treat a platform's publication of third-party content as the predicate "violation" for penalty calculation purposes, potentially extending §230's reach into the evidentiary phase of litigation. If the court grants exclusion on this ground, it would signal that plaintiffs in platform-liability cases must carefully disaggregate algorithmic and design conduct from publishing conduct even at the damages-quantification stage.