Browse Cases

Filing Section 230 First Amendment

Dowey v. Siems

District Court, D. Delaware · 2026-03-01 · Meta Platforms, Inc. (Instagram and Facebook)

Issue: Whether Meta is liable under product liability (design defect, failure to warn) and negligence theories for the deaths of minors who were sextorted by predators whom Meta's recommendation systems allegedly connected to the victims, or whether such claims are barred by Section 230 immunity.

Why It Matters: This case directly tests the boundaries of Section 230's design-defect carve-out post-*Moody v. NetChoice* and in light of the Supreme Court's non-decision in *Gonzalez v. Google*. Plaintiffs invoke the emerging theory—successful in *Garcia v. Character.AI*—that platform architectural choices, recommendation algorithms, and data-sharing features constitute the platform's own product design decisions outside Section 230's scope, particularly where the platform allegedly knew its systems were connecting minors to predators and declined to implement identified safeguards. If the court permits these claims to proceed past a motion to dismiss, it would reinforce a narrowing of Section 230 immunity for algorithmic harms and establish that platforms face tort exposure for design decisions that foreseeably facilitate criminal exploitation, even when the harmful content itself is user-generated.

View on CourtListener →
AI Liability

Williams v. Anthropic PBC

District Court, S.D. New York · 2 filings
2026-02-25 · Complaint

Why It Matters: Insufficient text to determine — the document as transmitted contains only page-header placeholders ("Case 1:26-cv-01566-JLR Document 1 Filed 02/25/26 Page X of 25") and no substantive text: no allegations, causes of action, parties' arguments, or judicial rulings. Without the filing's actual content, its significance cannot be assessed.

View on CourtListener →
2026-02-25 · Complaint

Why It Matters: Insufficient text to determine — while the broad joinder of major AI developers, cloud infrastructure providers, and data-aggregation companies in a single action may signal a wide-ranging AI liability theory, the summons alone provides no basis to assess what legal questions are advanced or what precedent the case might set.

View on CourtListener →
Opinion First Amendment

Armendariz v. City of Colorado Springs

Court of Appeals for the Tenth Circuit · 2026-02-24

Issue: Whether search warrants seeking (1) electronic devices and data from a protest organizer and (2) Facebook posts, chats, and events from a nonprofit organization's profile were overbroad in violation of the Fourth Amendment's particularity requirement.

Why It Matters: This case implicates First Amendment associational rights and the limits on government investigation of online platform content related to protest activities. The decision establishes that warrants seeking broad categories of social media data (posts, chats, events) from advocacy organizations may violate Fourth Amendment particularity requirements, with implications for government access to platform-hosted speech and organizing activity. The involvement of major digital rights organizations as amici (EFF, CDT, EPIC, Knight Institute) signals broader concerns about investigatory overreach into digital speech and association.

View on CourtListener →
Brief Section 230 First Amendment Motion to Dismiss

Ballentine v. Meta Platforms, Inc.

District Court, M.D. Florida · 2026-02-17 · Meta (Facebook); Accenture LLP (third-party content moderation vendor)

Issue: Whether Section 230(c)(1) and (c)(2) immunize a third-party content moderation vendor that assisted Meta in reviewing and recommending the termination of a user's Facebook advertising account from civil rights and discrimination claims brought under 42 U.S.C. §§ 1981, 1982, 1983, and 1985(3).

Why It Matters: This case raises the relatively underdeveloped question of whether §230 immunity extends downstream to third-party vendors that perform human content moderation review on behalf of platforms, a question with significant implications for the emerging ecosystem of platform-adjacent moderation contractors; if courts accept Accenture's argument that §230(c)(1) and (c)(2) together shield vendors assisting in publisher decisions, it would substantially insulate the outsourced content moderation industry from civil liability for moderation outcomes.

View on CourtListener →
Brief Section 230 First Amendment Motion to Dismiss

Trupia v. X Corp.

District Court, N.D. Texas · 2026-02-13 · X Corp. (formerly Twitter)

Issue: Whether §230(c)(1) of the Communications Decency Act immunizes X Corp. from civil liability for algorithmically suppressing or "deboosting" a user's posts, and whether the First Amendment independently bars claims challenging X Corp.'s editorial decisions to limit content visibility on its platform.

Why It Matters: This motion applies the §230 publisher immunity doctrine and the First Amendment editorial-discretion rationale from *Moody v. NetChoice* to algorithmic content suppression claims by a paying subscriber, potentially reinforcing that neither a paid platform subscription nor executive statements about "free speech" can contractually override §230 immunity or a platform's First Amendment right to moderate content.

View on CourtListener →
Exhibit Section 230 First Amendment Other

Doe v. Meta Platforms, Inc.

District Court, D. Colorado · 2026-02-12 · Meta (Instagram)

Issue: Whether Meta Platforms/Instagram's recommendation algorithm that connected a 13-year-old with an adult sex offender operating a fake account constitutes a product design defect giving rise to tort liability, and whether Section 230 of the Communications Decency Act bars such claims.

Why It Matters: This complaint directly tests whether plaintiffs can characterize Instagram's recommendation algorithm as a defective product—rather than as editorial publishing activity—to circumvent Section 230 immunity, following the analytical framework signaled in *Gonzalez v. Google* and pursued in the state attorneys general social-media litigation; a ruling on Meta's anticipated §230 defense could meaningfully clarify whether algorithmically generated user-to-user recommendations constitute protected publisher functions or actionable product design choices under Colorado law.

View on CourtListener →
Opinion First Amendment Preliminary Injunction

Rosado v. Bondi

District Court, N.D. Illinois · 2026-02-11 · Meta (Facebook), Apple (App Store)

Issue: *Rosado v. Bondi* asks whether senior federal officials violated the First Amendment by pressuring Facebook and Apple to remove plaintiffs' content — and whether plaintiffs can establish that the platforms acted *because of* that government pressure rather than for independent editorial reasons. The question is non-obvious because platforms routinely make their own content moderation decisions, making it difficult to trace any specific removal to government coercion rather than the platform's own judgment. The case also tests how far *NRA v. Vullo* (2024) extends: whether official language characterized as "demanding" action and directing that platforms "must be PROACTIVE" crosses the line from permissible government persuasion into unconstitutional coercion.

Why It Matters: This ruling gives content creators and publishers a concrete legal framework for challenging government pressure campaigns against social media platforms — a form of censorship that has been notoriously difficult to litigate because plaintiffs typically cannot prove a platform removed content *because of* the government rather than for its own independent reasons. The court's three-part convergence test — prior platform approval, swift removal following government contact, and officials publicly claiming credit — transforms an abstract constitutional protection into a workable standing roadmap for future jawboning plaintiffs. The ruling is nonetheless vulnerable on appeal: it sits in direct tension with the Supreme Court's causation skepticism in *Murthy v. Missouri* (2024), and the Seventh Circuit may require more granular, plaintiff-specific proof of coercion than this court's convergence framework demands. Critical questions also remain open, including the precise scope of the forthcoming injunction order and whether official public statements urging platform action constitute protected government speech rather than actionable coercion.

View on CourtListener →
Brief First Amendment Section 230 Complaint

NetChoice v. Wilson

District Court, D. South Carolina · 2026-02-09 · NetChoice (trade association representing social media platforms and internet companies)

Issue: Whether the South Carolina Age-Appropriate Code Design Act's requirements that covered online services exercise "reasonable care" to prevent harms to minors, disable certain engagement and discovery features, screen third-party advertising, and submit to third-party audits violate the First Amendment's prohibitions on content-based speech restrictions and compelled speech, are preempted by §230(c)(1) of the Communications Decency Act and COPPA, and violate the Commerce Clause and Due Process Clause.

Why It Matters: This complaint extends a growing line of coordinated First Amendment challenges by NetChoice to state-level online minor-protection laws, directly invoking *Moody v. NetChoice* and Fourth Circuit precedent to argue that platform curation and algorithmic editorial judgment are categorically protected expression, which, if adopted by the court, would significantly constrain states' ability to regulate platform design features affecting speech.

View on CourtListener →
Brief First Amendment Other

NewsGuard Technologies v. Federal Trade Commission

District Court, District of Columbia · 2026-02-06 · NewsGuard Technologies, Inc. (news rating/brand safety service)

Issue: In *NewsGuard Technologies v. FTC*, NewsGuard argues that the FTC's voluntary withdrawal of a Civil Investigative Demand did not moot its First Amendment and APA claims because the agency simultaneously obtained consent decrees in a separate antitrust proceeding that condition major advertising-agency mergers on prohibitions against using NewsGuard's services. The non-obvious dimension is that the alleged suppression did not occur through a direct regulatory order targeting NewsGuard — it occurred through merger approval conditions negotiated with large corporate third parties who had independent counsel and agreed to the terms. NewsGuard contends this amounts to the same unconstitutional government coercion of private actors to silence a disfavored editorial voice, only now packaged inside a judicially approved antitrust settlement.

Why It Matters: This case sits at an unusual intersection of antitrust enforcement, First Amendment press freedom, and administrative law, and the core constitutional question it raises has broad implications: whether the federal government can effectively blacklist a journalistic organization from its market by embedding speech-adjacent conditions inside merger consent decrees, insulating that pressure from First Amendment scrutiny through the procedural form of a negotiated antitrust settlement. The most doctrinally significant move in this filing is the attempt to extend *Vullo*'s jawboning framework to consent decrees negotiated in arms-length antitrust proceedings — a novel application that existing precedent neither clearly supports nor forecloses. If a court ultimately accepts NewsGuard's framing, it could significantly constrain the government's ability to include speech-adjacent conditions in antitrust settlements going forward, affecting how merger review is conducted whenever the target industry touches the flow of information or advertising.

View on CourtListener →
Brief Section 230 First Amendment AI Liability Opposition to Motion for Summary Judgment

St. Clair v. X.AI Holdings Corp.

District Court, S.D. New York · 2026-01-15 · xAI (Grok AI chatbot)

Issue: Whether §230(c)(1) of the Communications Decency Act immunizes an AI holding company (xAI Holdings Corp.) from tort liability arising from sexually explicit images of a real person generated by third-party users through the Grok AI chatbot on the X platform.

Why It Matters: This case presents an early and direct test of whether §230 immunity extends to an AI-powered generative image tool when harmful content is produced by third-party user prompts—a question with significant implications for how courts will treat AI platforms under existing intermediary liability doctrine and whether the "neutral tools" framework articulated in *Herrick v. Grindr* applies to generative AI systems.

View on CourtListener →
First Amendment

Mayday Health v. Jackley

District Court, S.D. New York · 2 filings
2026-01-06 · Other

Why It Matters: The case advances the "jawboning" doctrine by testing the limits of state attorney general authority to use cease-and-desist letters and retaliatory enforcement actions to suppress politically disfavored but constitutionally protected online speech, and it raises a significant question about whether *Younger* abstention can shield such proceedings from federal judicial review when the proceedings are allegedly pretextual.

View on CourtListener →
2026-01-06 · Complaint

Why It Matters: The case tests whether a state attorney general may use a consumer-protection enforcement threat as a mechanism to suppress a noncommercial publisher's truthful speech about out-of-state legal services — squarely implicating *Bigelow v. Virginia*'s protection for cross-border reproductive-health information — while also presenting a notable pleading-stage invocation of § 230(c)(1) as a shield against liability predicated on a website's hyperlinks to third-party content, potentially advancing the question of how § 230 interacts with state regulatory (rather than private civil) actions targeting a platform's linking choices.

View on CourtListener →
Opinion Section 230 First Amendment Appellate Opinion

Snap, Inc. v. The Eighth Judicial District Court of the State

Nevada Supreme Court · 2026 · Snap, Inc. (Snapchat)

Issue: Whether Section 230 of the Communications Decency Act bars the State of Nevada's claims under the Nevada Deceptive Trade Practices Act (NDTPA), and whether the First Amendment precludes the State's negligence claim against Snapchat.

Why It Matters: This decision represents a significant development in the intersection of Section 230 immunity, First Amendment protection, and state enforcement actions against social media platforms. The court's conclusion that negligence claims can proceed despite First Amendment concerns, while consumer protection claims remain Section 230-barred, suggests courts may be creating new pathways for platform liability through traditional tort theories that avoid Section 230's broad publisher immunity shield—particularly relevant given the *Garcia v. Character.AI* framework for product liability claims against technology platforms.

View on CourtListener →
AI Liability

Doe v. OpenAI, LP

District Court, District of Columbia · 2 filings
2025-12-30 · Other

Why It Matters: Insufficient text to determine — the document as submitted contains only page-header metadata (case number, document number, and page citations for all 28 pages of Document 10 in Case 1:25-cv-04564) and none of the filing's substantive allegations, arguments, rulings, or procedural history, so no accurate summary can be drawn from it.

View on CourtListener →
2025-12-30 · Complaint

Why It Matters: The complaint is a pro se filing asserting legally extraordinary claims — including a mathematically derived infringement probability of 10⁻⁴⁵ and the assertion that informal written descriptions of broad AI concepts constitute copyrightable expression sufficient to support trillion-dollar damages — and it is unlikely to survive threshold screening under Rule 12 or the copyright originality standard of *Feist Publications*; however, it illustrates a growing category of pro se litigation attempting to impose intellectual property and RICO liability on AI developers for the architecture of large language models, a question courts have not yet resolved on the merits.

View on CourtListener →
Brief First Amendment AI Liability Complaint

X.AI LLC v. Rob Bonta

District Court, C.D. California · 2025-12-29 · X.AI (xAI Corp., operator of Grok AI system)

Issue: Whether California Assembly Bill 2013's mandatory public disclosure requirements compelling AI developers to reveal training dataset sources, descriptions, and data-point counts violate the First Amendment's prohibition on compelled speech, the Takings Clause's just-compensation requirement, and the void-for-vagueness doctrine as applied to xAI's proprietary generative AI training data.

Why It Matters: This complaint presents a direct First Amendment challenge to a state government's attempt to regulate AI transparency through mandatory disclosure of proprietary training data, potentially setting precedent on whether compelled disclosure regimes targeting AI development methods receive strict or intermediate scrutiny. The case also tests the outer boundary of trade-secret property rights as against state AI accountability legislation, a question no circuit court has yet resolved.

View on CourtListener →
AI Liability

Carreyrou v. Anthropic PBC

District Court, N.D. California · 2 filings
2025-12-22 · Other

Why It Matters: This procedural dispute is an early but consequential test of whether mass AI copyright litigation against industry-wide defendants can proceed in a single forum, with the court's joinder ruling likely to determine whether fair use defenses—particularly the fourth-factor market-harm inquiry, which requires examining the aggregate effect of all defendants' conduct on the licensing market for AI training data—are adjudicated consistently or fragmented across parallel actions. The outcome may signal how courts will structure the wave of generative-AI copyright cases and whether the "industry-wide scheme" theory is sufficient to sustain multi-defendant joinder in AI training-data litigation.

View on CourtListener →
2025-12-22 · Other

Why It Matters: This complaint advances the unsettled question of whether the use of pirated training datasets constitutes willful copyright infringement by LLM developers at each stage of the AI development pipeline, potentially establishing that liability attaches not only at initial download but also at preprocessing, deduplication, and iterative fine-tuning; the plaintiffs' deliberate individual-action strategy, if successful, could foreclose industry efforts to resolve mass AI copyright claims through low-value class settlements.

View on CourtListener →
Brief AI Liability Section 230 First Amendment Complaint

D.W. v. Character Technologies, Inc.

District Court, E.D. Virginia · 2025-12-19 · Character Technologies, Inc. (Character.AI)

Issue: Whether Character Technologies, Inc. bears civil liability — under product liability or related tort theories — for physical or psychological harms allegedly caused to minor users by its Character.AI chatbot system.

Why It Matters: Insufficient text to determine the specific legal theories advanced or the precise harms alleged; however, the filing represents a civil action directly targeting an AI chatbot developer for user harms, which could contribute to the developing body of litigation testing the boundaries of tort and product liability frameworks as applied to conversational AI systems.

View on CourtListener →