Browse Cases
207 results
NetChoice v. Ellison
Issue: Whether Minnesota's proposed statutory restrictions on social media platform design features — including algorithmic amplification, engagement-based optimization, and "deceptive patterns" targeting minors — violate the First Amendment's prohibitions on compelled speech and forced hosting of third-party content.
Why It Matters: The report is significant as an exhibit because it reveals the state's own regulatory theory: that platform liability should attach to *design functions* rather than *content*. The AG explicitly frames that distinction as the constitutionally safer path in light of prior court decisions striking down content-based online speech laws, and NetChoice is apparently contesting it as insufficient to avoid First Amendment scrutiny.
View on CourtListener →
Media Matters for America v. Warren Paxton, Jr.
Issue: Whether the Texas Attorney General's investigation and civil investigative demand targeting Media Matters for America violated the First Amendment by constituting retaliatory government action in response to the organization's critical reporting about X (Twitter) and Elon Musk.
Why It Matters: This case directly applies the jawboning doctrine of Bantam Books and Backpage.com v. Dart to state attorney general investigations of media organizations covering technology platforms. It establishes that investigative demands issued in apparent retaliation for critical reporting about politically connected platform owners are actionable First Amendment violations, extending constitutional constraints on the use of regulatory process to chill platform-related journalism and reinforcing limits on government-platform coordination to suppress critical speech.
View on CourtListener →
Little v. Llano County
Issue: Insufficient text to determine. (This document is a New York state criminal appeal concerning a guilty plea, waiver of appeal rights, and suppression hearing forfeiture — it bears no relationship to the labeled case *Little v. Llano County* or to First Amendment law, Section 230, or AI/ML civil liability.)
Why It Matters: Insufficient text to determine. This decision addresses New York criminal procedure — specifically the validity of appeal waivers and suppression hearing forfeiture rules — and contains no analysis relevant to platform liability, First Amendment doctrine as applied to technology or public institutions, Section 230, or AI/ML regulation.
View on CourtListener →
Angelilli v. Activision Blizzard, Inc., 2025 WL 1181000
Issue: Whether § 230 bars claims that Activision's online gaming platform facilitated harassment and harmful conduct directed at plaintiff through features of its Call of Duty game and matchmaking system.
Why It Matters: Part of the emerging litigation testing the scope of § 230 in the online gaming context, where platform design choices about matchmaking, anonymity, and in-game communication systems intersect with severe harassment. Related to the companion decision in the same matter, 2025 WL 1184247.
View on CourtListener →
Angelilli v. Activision Blizzard, Inc., 2025 WL 1184247
Issue: Whether § 230 bars related claims arising from Activision's alleged failure to implement effective anti-harassment systems and safety features in the Call of Duty online platform, following the companion decision in 2025 WL 1181000.
Why It Matters: Companion to 2025 WL 1181000, together comprising the district court's full § 230 analysis of platform liability in the online gaming context. The pair of decisions addresses a relatively underexplored area of § 230 doctrine — the application of the statute to gaming platforms — and may be influential in subsequent litigation involving harassment on online multiplayer platforms.
View on CourtListener →
Fletcher v. Facebook, Inc.
Issue: Whether Facebook operates as a state actor subject to First Amendment constraints when terminating user access, either because it constitutes a public forum or because it acted under government coercion or direction.
Why It Matters: This complaint illustrates the continued assertion of public forum and state action theories against platforms post-Packingham, despite contrary controlling authority in Manhattan Community Access v. Halleck and Prager University v. Google establishing that private platforms are not state actors. The government coercion allegations invoke the framework of Murthy v. Missouri and Bantam Books, but the complaint's broad, conclusory assertions of government "coercion" and "direction," unsupported by specific factual allegations, underscore the demanding causation and traceability standards Murthy established for jawboning claims.
View on CourtListener →
Rosenblum v. Passes, Inc.
Issue: Whether Section 230 of the Communications Decency Act immunizes Passes, Inc. from liability for child sexual abuse material (CSAM) where plaintiff alleges the platform's agents actively solicited a minor to join the platform and then marketed and distributed the resulting CSAM.
Why It Matters: This case presents a potentially significant challenge to Section 230's scope in CSAM cases by alleging that platform agents' active recruitment and marketing of a minor creator transforms the platform from a passive host into a content developer or co-creator. If the material contribution theory survives the motion to dismiss, it could narrow Section 230 immunity for platforms whose employees or agents allegedly facilitate the creation or distribution of illegal content, particularly involving minors—extending the "content developer" exception beyond algorithmic design to direct human agency and solicitation.
View on CourtListener →
Trump Media & Technology Group Corp. v. De Moraes
Issue: Whether a Brazilian Supreme Court justice's orders requiring U.S.-based social media platforms to suspend user accounts and censor content accessible in the United States are enforceable under U.S. law, or whether they violate the First Amendment and conflict with the Communications Decency Act.
Why It Matters: This case presents a novel collision between foreign government content removal orders and U.S. platforms' First Amendment rights to resist compelled censorship. It could establish important precedent on whether U.S. courts will recognize foreign judicial orders as unconstitutional "jawboning" when they compel platforms to suppress lawful political speech accessible to American users, and may clarify the territorial limits of foreign content regulation authority over U.S.-based intermediaries.
View on CourtListener →
Doe v. Grindr Inc.
Why It Matters: Insufficient text to determine — this document is misfiled or misattributed and presents no holding, argument, or procedural development pertinent to Section 230 immunity, First Amendment platform doctrine, or civil liability for AI/ML systems.
View on CourtListener →
Why It Matters: Represents the Ninth Circuit's return to the Grindr platform years after Herrick v. Grindr in the Second Circuit. The case tests the reach of the Lemmon design-defect doctrine in a non-speed-filter context — specifically, whether geolocation and identity features of a dating app constitute the platform's own product conduct. The interplay between this Ninth Circuit decision and Herrick in the Second Circuit reflects the ongoing circuit-level divergence on platform liability for design features that enable offline harm.
View on CourtListener →
Karam v. Meta Platforms, Inc.
Issue: Whether Section 230 bars claims against Meta arising from the company's decision to ban or restrict plaintiff's Facebook account and its alleged failure to prevent other users from posting content about plaintiff.
Why It Matters: This decision reinforces the broad application of Section 230 immunity to platform account termination and content moderation decisions, extending publisher immunity not only to third-party content but also to the platform's own editorial decisions about which users may access its services. The ruling demonstrates courts' continued willingness to apply Section 230 at the motion to dismiss stage to bar claims challenging fundamental platform curation functions including account access decisions.
View on CourtListener →
Students Engaged in Advancing Texas v. Ken Paxton, Attorney General, State of Texas
Issue: Whether Texas HB18, a state law regulating social media platforms' content moderation and targeted advertising practices directed at minors, violates the First Amendment and is preempted by Section 230.
Why It Matters: This appeal presents a post-Moody test case for state regulation of social media platforms' treatment of minors and targeted advertising practices. The Fifth Circuit's resolution will clarify how Moody's framework for evaluating must-carry and content moderation mandates applies to age-based restrictions and commercial speech regulations, and whether Section 230 preempts state laws targeting platform design features and advertising practices rather than third-party content liability.
View on CourtListener →
M.P. v. Meta Platforms, Inc.
Issue: Whether § 230 bars claims that Meta's recommendation algorithms and design features facilitated the sexual exploitation of a minor by connecting the minor with an adult abuser on Instagram.
Why It Matters: An important Fourth Circuit decision on § 230 in the child sexual exploitation context, adding to the developing circuit-level body of law on whether design-defect theories and algorithm-based claims survive § 230 dismissal. The decision is significant for the wave of CSAM and child exploitation litigation against social media platforms pending in multiple circuits.
View on CourtListener →
Patterson v. Meta Platforms, Inc.
Issue: Whether New York state law claims against Meta arising from the platform's design and content recommendation features are preempted by § 230(e)(3) or otherwise barred as publisher-based liability.
Why It Matters: An important state-court application of § 230 preemption doctrine and the design-defect framework. The New York Appellate Division's analysis contributes to the growing body of state appellate authority on § 230 preemption and is significant for ongoing multi-district litigation against Meta in both state and federal courts.
View on CourtListener →
Why It Matters: This exhibit directly advances the question of whether AI-generated content that is sexually explicit and directed at a minor, produced autonomously by a large language model without direct human authorship, can ground product liability or speech tort claims against the developer. That question carries significant implications for how courts will categorize AI outputs (as protected or immunized speech, or as a defective product) and for the scope of Section 230 immunity in cases involving AI-generated rather than third-party content.
View on CourtListener →
Why It Matters: This exhibit is significant because it provides direct documentary evidence that Character.AI's system both generated child-directed sexual content and possessed an internal moderation mechanism that identified the content as violative yet failed to halt generation. That factual record could simultaneously support design defect claims (the safeguard was inadequate) and undermine any argument that harmful outputs were unforeseeable, potentially limiting the scope of any § 230 defense the platform might raise.
View on CourtListener →
Why It Matters: Filed as an exhibit rather than an opinion, this document supplies the factual predicate for design-defect and failure-to-warn claims against an AI chatbot platform. It advances the question of whether AI systems that generate harmful interactive content, and the companies that deploy them, can be held liable under traditional products liability frameworks when those systems foreseeably expose minors to sexual exploitation.
View on CourtListener →
Garcia v. Character Technologies, Inc.
Why It Matters: This complaint is significant because it represents a direct attempt to apply traditional products liability frameworks (design defect and failure to warn) to a generative AI system, treating the AI chatbot as a manufactured product rather than a publisher of third-party speech. It also proactively pleads around Section 230 immunity by characterizing the AI as a first-party content generator, a theory that, if credited by the court, could substantially expand tort exposure for AI developers.
View on CourtListener →
Why It Matters: This case directly tests whether traditional product liability frameworks — design defect and failure to warn — can be applied to a generative AI chatbot, potentially establishing that AI systems are "products" subject to strict liability rather than services entitled to speech-based or Section 230 protections. The complaint's explicit characterization of C.AI as an information content provider whose own-generated outputs caused harm, rather than a platform hosting third-party content, represents a deliberate litigation strategy to foreclose Section 230 immunity and could shape how courts classify AI-generated content for liability purposes.
View on CourtListener →