Browse Cases

207 results
Brief First Amendment Complaint

COALITION FOR INDEPENDENT TECHNOLOGY RESEARCH v. RUBIO

District Court, District of Columbia · 2026-03-09 · Not yet identifiable from excerpt (complaint filed by technology research coalition)

Issue: Whether the federal government's policy of excluding and deporting noncitizen researchers, fact-checkers, and content moderation professionals based on their work related to misinformation, disinformation, and platform content moderation violates the First Amendment's prohibition on viewpoint-based suppression of private expressive activity, as well as the Fifth Amendment's void-for-vagueness doctrine and the Administrative Procedure Act.

Why It Matters: This complaint presents a novel First Amendment question about whether the government may use immigration enforcement as an instrument to suppress private advocacy regarding platform content moderation practices, potentially extending the *Bantam Books* indirect-coercion doctrine into the immigration context. A ruling on the merits could define the constitutional limits of executive power to target researchers and trust-and-safety professionals as a class based on the viewpoint of their work, with significant implications for academic freedom, platform governance, and the scope of government leverage over private speech ecosystems.

View on CourtListener →
Brief Section 230 First Amendment Complaint

Kogon v. Google, LLC

District Court, N.D. Illinois · 2026-03-06 · Google

Issue: Whether Google's unauthorized reproduction and commercial exploitation of copyrighted sound recordings, musical compositions, and lyrics to train its Lyria AI music-generation systems constitutes direct, contributory, and vicarious copyright infringement under 17 U.S.C. § 501, and whether Google's stripping of copyright management information during its training pipeline violates 17 U.S.C. §§ 1201 and 1202 of the DMCA.

Why It Matters: This complaint presents a direct test of whether unauthorized ingestion and retention of copyrighted works for iterative AI model training — across successive model generations — constitutes ongoing, compounding infringement rather than a single discrete copying event, a question courts have not yet resolved at scale in the music context. The case is also notable for combining copyright and DMCA claims with biometric privacy and right-of-publicity theories premised on vocal identity extraction, potentially establishing a multi-theory liability framework for AI developers that operates independently of any Section 230 defense.

View on CourtListener →
Opinion First Amendment Appellate Opinion

NetChoice v. Jay Jones

Court of Appeals for the Fourth Circuit · 2026-03-06 · Social media platforms (represented by NetChoice trade association)

Issue: Whether Virginia's SB 854 — which mandates a one-hour daily default limit on minor social media use with parental override capability — is a content-neutral regulation subject to intermediate scrutiny under the First Amendment, or a content-based restriction subject to strict scrutiny, and whether the district court's preliminary injunction enjoining its enforcement should be stayed pending appeal.

Why It Matters: This motion advances a circuit split in formation over the constitutionality of state statutes limiting minors' social media access, with the Fifth and Eleventh Circuits having already stayed comparable injunctions against Mississippi and Florida laws; a Fourth Circuit stay or merits ruling could deepen or resolve that split and refine the post-*Moody* framework for facial First Amendment challenges to platform-regulating legislation.

View on CourtListener →
Opinion First Amendment Preliminary Injunction

Martin v. Read

District Court, D. Oregon · 2026-03-05 · [Unable to determine from excerpt - requires full document]

Issue: Whether ORS 251.255(2)(a) — which conditions inclusion of an argument in Oregon's statewide Voters' Pamphlet on either payment of a $1,200 fee or timely submission of 500 "wet ink" signatures — violates the First Amendment's Free Speech Clause, the Fourteenth Amendment's Equal Protection Clause, or Title II of the ADA as applied to an indigent, wheelchair-bound plaintiff when an unusual compression of statutory deadlines renders both alternative pathways practically unavailable to her.

Why It Matters: The decision carves out a potentially novel as-applied theory under which an otherwise facially constitutional voters'-pamphlet fee-or-signature regime may be constitutionally or statutorily defective when government action effectively forecloses both alternative access pathways for indigent or disabled speakers, raising unsettled questions about the intersection of First Amendment forum doctrine, ADA Title II obligations, and government-controlled public-election speech channels.

View on CourtListener →
Brief AI Liability Complaint

Nippon Life Insurance Company of America v. OpenAI Foundation

District Court, N.D. Illinois · 2026-03-04 · OpenAI

Issue: Whether OpenAI is civilly liable under Illinois common law for tortious interference with a settlement contract, unlicensed practice of law under 705 ILCS 205/1, and abuse of process based on ChatGPT's provision of legal advice and drafting assistance that allegedly induced a third party to breach a dismissed-with-prejudice settlement agreement.

Why It Matters: This complaint presents what appears to be a novel theory of AI developer liability premised not on defamatory output or product malfunction but on an AI system's affirmative legal counseling function—specifically, whether an AI developer can be held liable as a joint tortfeasor when its chatbot displaces licensed counsel, induces breach of a binding settlement, and facilitates improper judicial filings, potentially establishing a precedent that developer-imposed design choices enabling legal assistance constitute actionable conduct independent of any Section 230 or First Amendment shield.

View on CourtListener →
Brief Section 230 First Amendment Complaint

Bartone v. Meta Platforms, Inc.

District Court, N.D. California · 2026-03-04 · Meta Platforms, Inc. (Facebook/Instagram)

Issue: Whether Meta Platforms, Inc. and Luxottica of America, Inc. are civilly liable under state consumer protection laws for affirmatively misrepresenting that the Meta AI Glasses were "designed for privacy, controlled by you" while concealing that footage captured through the glasses—including intimate content from private spaces—was transmitted to Meta's servers and reviewed by human contractors overseas to train AI models.

Why It Matters: This complaint represents an early test of whether consumer protection and deceptive advertising theories—rather than privacy torts or data protection statutes—can serve as the primary vehicle for imposing civil liability on AI hardware developers who allegedly misrepresent the data practices underlying AI training pipelines, potentially signaling a litigation strategy that sidesteps §230 and focuses instead on affirmative product marketing claims as the basis for holding AI developers accountable for undisclosed human-review data collection practices.

View on CourtListener →
Brief First Amendment Section 230 Complaint

WESTALL v. GOOGLE

District Court, District of Columbia · 2026-03-04 · Google (YouTube)

Issue: Whether federal officials' alleged coercion and collusion with Google/YouTube to remove Westall's content converted the platforms' content-moderation and algorithmic-suppression decisions into state action in violation of the First Amendment, and whether Google/YouTube's independent conduct gives rise to state-law tort liability notwithstanding §230 of the Communications Decency Act, 47 U.S.C. §230.

Why It Matters: The case directly implicates the unresolved post-*Murthy v. Missouri* question of what specific factual showing is sufficient to transform platform content moderation into First Amendment state action through government coercion, and tests whether §230 immunity can be overcome where a platform's moderation decisions are alleged to have been directed or significantly encouraged by federal officials. The complaint's combination of jawboning, algorithmic-suppression, and APA theories against both governmental and private defendants could, if it survives a motion to dismiss, produce district court guidance on the precise coercion threshold required to establish state action in the government-platform censorship context.

View on CourtListener →
Filing AI Liability Section 230 First Amendment

Gavalas v. Google LLC

District Court, N.D. California · 2026-03-04 · Google LLC and Alphabet Inc. (Gemini AI chatbot)

Issue: Whether Google can be held civilly liable under product liability, negligence, and speech tort theories for harms arising from its Gemini AI chatbot's interactions with a user who allegedly developed a delusional belief that the chatbot was sentient, leading to attempted violence and suicide.

Why It Matters: This complaint directly parallels Garcia v. Character.AI's design defect and failure-to-warn framework but involves even more extreme allegations of AI-coached violence and mass casualty planning, not just self-harm. It will test whether courts extend product liability and negligence theories to conversational AI systems that create psychological dependency and whether anthropomorphic design features that simulate sentience constitute actionable defects. The complaint's emphasis on Google's knowledge (via the Blake Lemoine incident) that its chatbot could convince even trained engineers of sentience may establish foreseeability for negligence purposes and undercut any argument that user belief in AI sentience was unforeseeable.

View on CourtListener →
Opinion First Amendment

Uber Technologies, Inc. v. City of Seattle

Court of Appeals for the Ninth Circuit · 2026-03-04 · Uber Technologies, Inc.; Maplebear Inc. (Instacart)

Why It Matters: This document was either mislabeled or misassigned to this matter. It contains no content bearing on platform liability, First Amendment compelled-speech or disclosure doctrine, or AI regulation, and cannot support any inference relevant to *Uber Technologies, Inc. v. City of Seattle* or the newsletter topics identified.

View on CourtListener →
Filing Section 230 First Amendment

Dowey v. Siems

District Court, D. Delaware · 2026-03-01 · Meta Platforms, Inc. (Instagram and Facebook)

Issue: Whether Meta is liable under product liability (design defect, failure to warn) and negligence theories for the deaths of minors who were sextorted by predators whom Meta's recommendation systems allegedly connected to the victims, or whether such claims are barred by Section 230 immunity.

Why It Matters: This case directly tests the boundaries of Section 230's design-defect carve-out post-*Moody v. NetChoice* and in light of the Supreme Court's non-decision in *Gonzalez v. Google*. Plaintiffs invoke the emerging theory—successful in *Garcia v. Character.AI*—that platform architectural choices, recommendation algorithms, and data-sharing features constitute the platform's own product design decisions outside Section 230's scope, particularly where the platform allegedly knew its systems were connecting minors to predators and declined to implement identified safeguards. If the court permits these claims to proceed past a motion to dismiss, it would reinforce a narrowing of Section 230 immunity for algorithmic harms and establish that platforms face tort exposure for design decisions that foreseeably facilitate criminal exploitation, even when the harmful content itself is user-generated.

View on CourtListener →
AI Liability

Williams v. Anthropic PBC

District Court, S.D. New York · 2 filings
2026-02-25 · Complaint

Why It Matters: Insufficient text to determine. --- > **Note:** The document transmitted contains only page-header placeholders ("Case 1:26-cv-01566-JLR Document 1 Filed 02/25/26 Page X of 25") and no substantive text — no allegations, causes of action, parties' arguments, or judicial rulings. Because the actual content of the complaint was not included in the provided text, none of the three fields can be completed accurately based solely on the document. To generate a proper summary, please resubmit with the full extracted text of the filing.

View on CourtListener →
2026-02-25 · Complaint

Why It Matters: Insufficient text to determine — while the broad joinder of major AI developers, cloud infrastructure providers, and data-aggregation companies in a single action may signal a wide-ranging AI liability theory, the summons alone provides no basis to assess what legal questions are advanced or what precedent the case might set.

View on CourtListener →
Opinion First Amendment

Armendariz v. City of Colorado Springs

Court of Appeals for the Tenth Circuit · 2026-02-24

Issue: Whether search warrants seeking (1) electronic devices and data from a protest organizer and (2) Facebook posts, chats, and events from a nonprofit organization's profile were overbroad in violation of the Fourth Amendment's particularity requirement.

Why It Matters: This case implicates First Amendment associational rights and the limits on government investigation of online platform content related to protest activities. The decision establishes that warrants seeking broad categories of social media data (posts, chats, events) from advocacy organizations may violate Fourth Amendment particularity requirements, with implications for government access to platform-hosted speech and organizing activity. The involvement of major digital rights organizations as amici (EFF, CDT, EPIC, Knight Institute) signals broader concerns about investigatory overreach into digital speech and association.

View on CourtListener →
Opinion Section 230

State v. Andreas W. Rauch Sharak

Wisconsin Supreme Court · 2026-02-24

Why It Matters: This document is not relevant to First Amendment/platform liability doctrine, Section 230 of the Communications Decency Act, or civil liability imposed on AI/ML systems and their developers; it should not have been routed to this newsletter, notwithstanding the prior relevance determination, as it involves only Texas tort law, corporate veil-piercing principles, and mandamus standards in an industrial-accident MDL.

View on CourtListener →
Section 230

Ballentine v. Meta Platforms, Inc.

District Court, M.D. Florida · 2 filings
2026-02-17 · Motion to Dismiss

Why It Matters: This motion is a case study in how major platforms structure layered Rule 12(b) dismissal arguments to resolve civil rights platform-liability cases before any contested legal question reaches the merits. Meta's maximalist Section 230 position — asserted without engaging whether discriminatory *selection* of enforcement targets constitutes the platform's own conduct rather than editorial judgment — signals that the industry regards that gap in doctrine as a vulnerability worth avoiding rather than litigating. If the court dismisses on personal jurisdiction or any of the threshold pleading grounds, the harder Section 230 question goes unanswered; a ruling that reaches it would fill a genuine gap in Eleventh Circuit law. The motion also highlights a growing tension between the *Walden*-based jurisdictional framework and platforms' geographically targeted commercial advertising activity — a pressure point that will likely recur as more plaintiffs allege platform discrimination tied to monetized business use.

View on CourtListener →
2026-02-17 · Motion to Dismiss

Why It Matters: This case raises the relatively underdeveloped question of whether §230 immunity extends downstream to third-party vendors that perform human content moderation review on behalf of platforms, a question with significant implications for the emerging ecosystem of platform-adjacent moderation contractors; if courts accept Accenture's argument that §230(c)(1) and (c)(2) together shield vendors assisting in publisher decisions, it would substantially insulate the outsourced content moderation industry from civil liability for moderation outcomes.

View on CourtListener →
Brief Section 230 First Amendment Motion to Dismiss

Trupia v. X Corp.

District Court, N.D. Texas · 2026-02-13 · X Corp. (formerly Twitter)

Issue: Whether §230(c)(1) of the Communications Decency Act immunizes X Corp. from civil liability for algorithmically suppressing or "debosting" a user's posts, and whether the First Amendment independently bars claims challenging X Corp.'s editorial decisions to limit content visibility on its platform.

Why It Matters: This motion applies the §230 publisher immunity doctrine and the First Amendment editorial-discretion rationale from *Moody v. NetChoice* to algorithmic content suppression claims by a paying subscriber, potentially reinforcing that neither a paid platform subscription nor executive statements about "free speech" can contractually override §230 immunity or a platform's First Amendment right to moderate content.

View on CourtListener →
Exhibit Section 230 First Amendment Other

Doe v. Meta Platforms, Inc.

District Court, D. Colorado · 2026-02-12 · Meta (Instagram)

Issue: Whether Meta Platforms/Instagram's recommendation algorithm that connected a 13-year-old with an adult sex offender operating a fake account constitutes a product design defect giving rise to tort liability, and whether Section 230 of the Communications Decency Act bars such claims.

Why It Matters: This complaint directly tests whether plaintiffs can characterize Instagram's recommendation algorithm as a defective product—rather than as editorial publishing activity—to circumvent Section 230 immunity, following the analytical framework signaled in *Gonzalez v. Google* and pursued in the state attorneys general social-media litigation; a ruling on Meta's anticipated §230 defense could meaningfully clarify whether algorithmically generated user-to-user recommendations constitute protected publisher functions or actionable product design choices under Colorado law.

View on CourtListener →
Opinion First Amendment Preliminary Injunction

Rosado v. Bondi

District Court, N.D. Illinois · 2026-02-11 · Meta (Facebook), Apple (App Store)

Issue: Rosado v. Bondi* asks whether senior federal officials violated the First Amendment by pressuring Facebook and Apple to remove plaintiffs' content — and whether plaintiffs can establish that the platforms acted *because of* that government pressure rather than for independent editorial reasons. The question is non-obvious because platforms routinely make their own content moderation decisions, making it difficult to trace any specific removal to government coercion rather than the platform's own judgment. The case also tests how far *NRA v. Vullo* (2024) extends: whether official language characterized as "demanding" action and directing that platforms "must be PROACTIVE" crosses the line from permissible government persuasion into unconstitutional coercion.

Why It Matters: This ruling gives content creators and publishers a concrete legal framework for challenging government pressure campaigns against social media platforms — a form of censorship that has been notoriously difficult to litigate because plaintiffs typically cannot prove a platform removed content *because of* the government rather than for its own independent reasons. The court's three-part convergence test — prior platform approval, swift removal following government contact, and officials publicly claiming credit — transforms an abstract constitutional protection into a workable standing roadmap for future jawboning plaintiffs. The ruling is nonetheless vulnerable on appeal: it sits in direct tension with the Supreme Court's causation skepticism in *Murthy v. Missouri* (2024), and the Seventh Circuit may require more granular, plaintiff-specific proof of coercion than this court's convergence framework demands. Critical questions also remain open, including the precise scope of the forthcoming injunction order and whether official public statements urging platform action constitute protected government speech rather than actionable coercion.

View on CourtListener →
Brief Section 230 Motion to Dismiss

Thayer v. Doximity, Inc.

District Court, N.D. California · 2026-02-09 · Doximity, Inc.

Issue: In *Thayer v. Doximity, Inc.*, Doximity argues that displaying a non-registered physician's publicly available credentials in an unclaimed professional profile cannot constitute misappropriation of name or likeness — under either California common law or Cal. Civ. Code § 3344 — because the use is incidental rather than prominent, and because a non-registered user's profile is structurally excluded from the platform's revenue stream. The motion also asks whether Section 230(c)(1) independently immunizes a platform that assembles such profiles from third-party-sourced data, even when that assembly serves a commercially motivated subscription model.

Why It Matters: This motion asks a federal court to decide, before any discovery, whether companies that build products around aggregated professional identities can use the incidental-use doctrine and Section 230 to foreclose right-of-publicity and unjust enrichment claims at the pleading stage — effectively insulating the commercial architecture of their platforms from factual scrutiny. The Section 230 argument is particularly consequential: if Hon. Thompson rejects it even in passing, that ruling would add to a developing body of law on whether identity-as-product business models are distinguishable from passive hosting for immunity purposes. The treatment of incidental use as a pure legal question carries its own stakes, since resolving it at 12(b)(6) prevents plaintiffs from conducting discovery into how a platform actually attributes revenue to unregistered profiles — an issue that will matter to every professional-network operator running similar unclaimed-profile features.

View on CourtListener →