Browse Cases

Montoya v. Character Technologies, Inc.
Issue: Whether Character Technologies, Inc., its individual founders, and Google/Alphabet are strictly liable under product liability theories of design defect and failure to warn, and liable in negligence, for the death of a 13-year-old minor allegedly caused by harmful design choices embedded in the Character.AI large language model.
Why It Matters: The complaint's explicit pleading that C.AI's harmful outputs are the product of Defendants' own programming decisions—not third-party content—appears strategically crafted to foreclose a Section 230 defense, potentially advancing the theory that AI-generated outputs are manufacturer speech subject to product liability rather than platform-hosted user content.
View on CourtListener →

Encyclopaedia Britannica, Inc. v. Perplexity AI, Inc.
Issue: Whether Perplexity AI's automated answer engine, which generates verbatim or near-verbatim reproductions of copyrighted content in response to user-directed queries, constitutes "volitional conduct" by Perplexity sufficient to support direct copyright infringement liability under 17 U.S.C. § 106, as governed by the Second Circuit's *Cablevision* volitional-conduct doctrine.
Why It Matters: This motion squarely presents to a federal court the question of whether the *Cablevision* volitional-conduct doctrine—developed in the context of automated cable DVR systems—extends to shield generative AI answer engines from direct copyright infringement liability when their outputs reproduce third-party copyrighted material at a user's explicit direction. The court's ruling could establish a significant precedent governing the allocation of direct infringement liability between AI platform operators and their users across the rapidly expanding universe of retrieval-augmented generation (RAG) products.
View on CourtListener →

Doe v. Discord, Inc.
Issue: *Doe v. Discord, Inc.* asks whether 47 U.S.C. § 230(c)(1) immunizes a social media platform from state-law claims arising from the sexual exploitation of a minor user, when the plaintiff frames those claims not merely as failures to moderate content but as independent product-design defects, failure-to-warn violations, and misrepresentations about platform safety. The question is sharpened by the plaintiff's deliberate pleading strategy of recasting monitoring-and-blocking duties under product-liability and tort labels — an approach that has survived § 230 challenges in some courts — and by Discord's specific marketing representations about user safety directed at minors and their families.
Why It Matters: This ruling reinforces § 230's breadth in the Sixth Circuit by applying the *Jones* framework with particular rigor to a child-safety fact pattern, directly rejecting the product-liability recharacterization strategy that plaintiffs in platform-harm litigation have increasingly deployed to escape immunity. The decision supplies the Northern District of Ohio's most detailed analysis of the *Barnes* promissory-estoppel exception, drawing an explicit line between aspirational corporate safety messaging — which cannot anchor a surviving misrepresentation claim — and specific, individualized promises that could. It also creates a meaningful doctrinal gap with the Ninth Circuit's *Lemmon v. Snap* line, which permits negligent-design claims to proceed when a platform feature is treated as the defendant's own product design rather than as third-party content moderation, a tension the Sixth Circuit has not yet resolved. The with-prejudice dismissal signals that courts applying *Jones* are unlikely to permit iterative re-pleading aimed at constructing a § 230-surviving theory once the gravamen of the complaint targets moderation.
View on CourtListener →

Angelilli v. Activision Blizzard, Inc., 2025 WL 1181000
Issue: Whether § 230 bars claims that Activision's online gaming platform facilitated harassment and harmful conduct directed at plaintiff through features of its Call of Duty game and matchmaking system.
Why It Matters: Part of the emerging litigation testing the scope of § 230 in the online gaming context, where platform design choices about matchmaking, anonymity, and in-game communication systems intersect with severe harassment. Related to the companion decision in the same matter, 2025 WL 1184247.
View on CourtListener →

Angelilli v. Activision Blizzard, Inc., 2025 WL 1184247
Issue: Whether § 230 bars related claims arising from Activision's alleged failure to implement effective anti-harassment systems and safety features in the Call of Duty online platform, following the companion decision in 2025 WL 1181000.
Why It Matters: Companion to 2025 WL 1181000, together comprising the district court's full § 230 analysis of platform liability in the online gaming context. The pair of decisions addresses a relatively underexplored area of § 230 doctrine — the application of the statute to gaming platforms — and may be influential in subsequent litigation involving harassment on online multiplayer platforms.
View on CourtListener →

Rosenblum v. Passes, Inc.
Issue: Whether Section 230 of the Communications Decency Act immunizes Passes, Inc. from liability for child sexual abuse material (CSAM) where plaintiff alleges the platform's agents actively solicited a minor to join the platform and then marketed and distributed the resulting CSAM.
Why It Matters: This case presents a potentially significant challenge to Section 230's scope in CSAM cases by alleging that platform agents' active recruitment and marketing of a minor creator transforms the platform from a passive host into a content developer or co-creator. If the material contribution theory survives the motion to dismiss, it could narrow Section 230 immunity for platforms whose employees or agents allegedly facilitate the creation or distribution of illegal content, particularly involving minors—extending the "content developer" exception beyond algorithmic design to direct human agency and solicitation.
View on CourtListener →

Trump Media & Technology Group Corp. v. De Moraes
Issue: Whether a Brazilian Supreme Court justice's orders requiring U.S.-based social media platforms to suspend user accounts and censor content accessible in the United States are enforceable under U.S. law, or whether they violate the First Amendment and conflict with the Communications Decency Act.
Why It Matters: This case presents a novel collision between foreign government content removal orders and U.S. platforms' First Amendment rights to resist compelled censorship. It could establish important precedent on whether U.S. courts will recognize foreign judicial orders as unconstitutional "jawboning" when they compel platforms to suppress lawful political speech accessible to American users, and may clarify the territorial limits of foreign content regulation authority over U.S.-based intermediaries.
View on CourtListener →

Doe v. Grindr Inc.
Why It Matters: Represents the Ninth Circuit's return to the Grindr platform years after *Herrick v. Grindr* in the Second Circuit. The case tests the reach of the *Lemmon* design-defect doctrine in a non-speed-filter context — specifically, whether geolocation and identity features of a dating app constitute the platform's own product conduct. The interplay between this decision and *Herrick* reflects the ongoing circuit-level divergence on platform liability for design features that enable offline harm.
View on CourtListener →

Karam v. Meta Platforms, Inc.
Issue: Whether Section 230 bars claims against Meta arising from the company's decision to ban or restrict plaintiff's Facebook account and its alleged failure to prevent other users from posting content about plaintiff.
Why It Matters: This decision reinforces the broad application of Section 230 immunity to platform account termination and content moderation decisions, extending publisher immunity not only to third-party content but also to the platform's own editorial decisions about which users may access its services. The ruling demonstrates courts' continued willingness to apply Section 230 at the motion to dismiss stage to bar claims challenging fundamental platform curation functions including account access decisions.
View on CourtListener →

M.P. v. Meta Platforms, Inc.
Issue: Whether § 230 bars claims that Meta's recommendation algorithms and design features facilitated the sexual exploitation of a minor by connecting the minor with an adult abuser on Instagram.
Why It Matters: An important Fourth Circuit decision on § 230 in the child sexual exploitation context, adding to the developing circuit-level body of law on whether design-defect theories and algorithm-based claims survive § 230 dismissal. The decision is significant for the wave of CSAM and child exploitation litigation against social media platforms pending in multiple circuits.
View on CourtListener →

Patterson v. Meta Platforms, Inc.
Issue: Whether New York state law claims against Meta arising from the platform's design and content recommendation features are preempted by § 230(e)(3) or otherwise barred as publisher-based liability.
Why It Matters: An important state-court application of § 230 preemption doctrine and the design-defect framework. The New York Appellate Division's analysis contributes to the growing body of state appellate authority on § 230 preemption and is significant for ongoing multi-district litigation against Meta in both state and federal courts.
View on CourtListener →

Why It Matters: This exhibit directly advances the question of whether AI-generated content that is sexually explicit and directed at a minor — produced autonomously by a large language model without direct human authorship — can ground product liability or speech tort claims against the developer, a question with significant implications for how courts will categorize AI outputs (as "speech" protected or immunized, or as a defective product) and for the scope of Section 230 immunity in cases involving AI-generated rather than third-party content.
View on CourtListener →

Why It Matters: This exhibit is significant because it provides direct documentary evidence that Character.AI's system both generated child-directed sexual content and possessed an internal moderation mechanism that identified the content as violative yet failed to halt generation — a factual record that could simultaneously support design defect claims (the safeguard was inadequate) and undermine any argument that harmful outputs were unforeseeable, potentially limiting the scope of any § 230 defense the platform might raise.
View on CourtListener →

Why It Matters: Filed as an exhibit rather than an opinion, this document supplies factual predicate for design-defect and failure-to-warn claims against an AI chatbot platform, potentially advancing the question of whether AI systems that generate harmful interactive content — and the companies that deploy them — can be held liable under traditional products liability frameworks when those systems foreseeably expose minors to sexual exploitation.
View on CourtListener →

Garcia v. Character Technologies, Inc.
Why It Matters: This complaint represents a direct attempt to apply traditional products liability frameworks—design defect and failure to warn—to a generative AI system, treating the AI chatbot as a manufactured product rather than a publisher of third-party speech. It also proactively pleads around Section 230 immunity by characterizing the AI as a first-party content generator, a theory that, if credited by the court, could substantially expand tort exposure for AI developers.
View on CourtListener →

Why It Matters: This case directly tests whether traditional product liability frameworks — design defect and failure to warn — can be applied to a generative AI chatbot, potentially establishing that AI systems are "products" subject to strict liability rather than services entitled to speech-based or Section 230 protections. The complaint's explicit characterization of C.AI as an information content provider whose own-generated outputs caused harm, rather than a platform hosting third-party content, represents a deliberate litigation strategy to foreclose Section 230 immunity and could shape how courts classify AI-generated content for liability purposes.
View on CourtListener →

Why It Matters: This complaint is among the first to assert traditional products liability theories—design defect and failure to warn—directly against a generative AI system and its developers. Its explicit characterization of C.AI as an information content provider rather than a neutral platform signals a deliberate litigation strategy to foreclose Section 230 immunity and, if the framing survives judicial scrutiny, could establish a significant template for future AI tort suits.
View on CourtListener →

Stebbins v. Rumble Inc.
Issue: In *Stebbins v. Rumble Inc.*, plaintiff David Stebbins argues that a statement Rumble made in a related miscellaneous proceeding — acknowledging an editorial decision to permit anonymous posting — constitutes newly discovered evidence sufficient under FRCP 60(b)(2) to reopen the court's prior dismissal of Rumble as a defendant. The non-obvious dimension is whether a platform's litigation statement made to *resist* a third-party subpoena on First Amendment grounds can be repurposed as an affirmative admission of tortious editorial control, and whether such an admission could itself defeat § 230 immunity by recharacterizing a general anonymity policy as the platform's "own conduct" causally contributing to the alleged harm.
Why It Matters: This motion illustrates a strategy plaintiffs have repeatedly attempted with limited success: taking a platform's statement made in an unrelated legal context to protect its users and repackaging it as a confession of liability. The legal obstacle is twofold — courts have consistently treated decisions about anonymous posting as quintessential editorial functions protected by § 230, and statements made to assert a procedural or constitutional right are not equivalent to admissions of underlying tortious conduct. The motion also tests the outer boundary of the "platform's own conduct" exception established in cases like *Roommates.com*: whether a documented platform policy enabling anonymity could ever constitute material contribution to the *unlawfulness* of specific content, rather than merely to its delivery — a question that remains theoretically open but has yet to find a receptive court on analogous facts. More broadly, the filing is a useful marker of how the procedural vehicle of FRCP 60(b) is being used in pro se platform-liability litigation to challenge interlocutory § 230 dismissals, a recurring posture that existing doctrinal commentary has not yet systematically addressed.
View on CourtListener →

Computer & Communications Industry Association v. Paxton
Issue: Whether Texas House Bill 18's requirements that covered digital service providers monitor and block broadly defined categories of content accessible to minors violate the First Amendment as content-based and viewpoint-based prior restraints on protected speech, and whether those requirements are preempted by 47 U.S.C. § 230.
Why It Matters: The case presents a direct First Amendment challenge to state-mandated content filtering for minors—an emerging category of legislation enacted across multiple states—and the Fifth Circuit's ruling could establish binding precedent on whether such monitoring-and-blocking mandates survive strict scrutiny and on the scope of § 230 preemption of state child-safety internet laws.
View on CourtListener →