Browse Cases

207 results
Section 230

In re: Roblox Corporation Child Sexual Exploitation and Assault Litigation

District Court, N.D. California · 2 filings
2025-09-18 · Other

Why It Matters: This filing tests whether the JPML will treat a platform's alleged safety-design failures as an outcome-agnostic consolidation anchor — a theory that, if accepted, could draw a broader category of technology-facilitated child harm cases into MDL proceedings that were constituted around sexual exploitation specifically. The brief's most contested move is its dismissal of Section 230 differentiation: the FOSTA-SESTA carve-out from § 230 immunity is available to most MDL No. 3166 plaintiffs but categorically inapplicable to Seitz, meaning the § 230 pretrial framework already developed in the MDL may not translate cleanly to her claims. If the Panel credits Defendants' taxonomy — distinguishing sexual exploitation from violent or extremist content facilitation — it could signal a meaningful limit on how broadly platform-identity can unify factually adjacent but legally divergent cases within a single MDL proceeding.

View on CourtListener →
First Amendment

AARON v. BONDI

District Court, District of Columbia · 3 filings
2025-12-08 · Motion to Dismiss

Why It Matters: This case sits at the leading edge of post-*Murthy* litigation testing how far the government can pressure private platforms to remove disfavored content before crossing the constitutional line into coercion — and how easily those claims can survive dismissal. The brief forces a resolution of several genuinely unsettled questions: whether *Murthy*'s "dispel the obvious alternative explanation" requirement applies with full force at the Rule 12(b) pleading stage, or whether it is modulated by *Twombly*/*Iqbal*'s plausibility standard when a third party like Apple has offered a facially legitimate competing reason for its own conduct. It also presses the question of whether *Vullo*'s objective-threat standard can be satisfied by a coordinated pattern of public statements and inter-agency signals rather than a single private communication with explicit regulatory teeth. And on retaliation standing, the court's ruling could produce a significant clarifying precedent on whether specifically directed, named-and-targeted government pressure — as distinct from the broadly speculative surveillance risk *Clapper* addressed — can constitute concrete First Amendment injury before any enforcement action is completed.

View on CourtListener →
2025-12-08 · Opposition to Motion to Dismiss

Why It Matters: This case tests whether the government can effectively remove a legal app from circulation by calling a private company and asking — not ordering — it to act, without ever filing a charge or passing a law. The standing fight may prove as consequential as the underlying free speech question: a ruling that plaintiffs cannot trace Apple's decision to the government's conduct would give officials a roadmap for suppressing speech through informal corporate pressure with minimal constitutional accountability. Plaintiffs' procedural-posture argument — that *Murthy* sets an evidentiary ceiling, not a pleading floor — is the brief's most significant doctrinal contribution, and no circuit has yet authoritatively resolved that question. If courts accept it, same-day compliance following explicit demand language may become the template for how future plaintiffs plead jawboning claims in the post-*Murthy* landscape.

View on CourtListener →
2025-12-08 · Motion to Dismiss

Why It Matters: This brief tests whether *Murthy v. Missouri*'s demanding causation framework, developed for a sprawling multi-platform content-moderation pressure apparatus, can be extended to defeat standing in a materially narrower scenario involving a single named app, a single platform, and an identifiable sequence of government contact followed by removal—the kind of granular fact pattern *Murthy* itself suggested was necessary for standing in the first place. Defendants' treatment of Apple's post-hoc public explanation as conclusively defeating a pretext argument at the pleading stage is legally aggressive and, if accepted, would create a significant structural barrier to coercion claims: platforms could insulate government pressure from judicial scrutiny simply by invoking an existing content policy. The brief's retaliation argument, anchored to *Media Matters v. Paxton*, raises the open question of whether an explicit, named, on-record statement of investigative interest by a senior law enforcement official crosses from non-actionable criticism into the individualized targeting recognized in cases Defendants themselves cite—a line the D.C. Circuit has not yet clearly drawn in this context.

View on CourtListener →
Brief · Section 230 · First Amendment · Complaint

Doe S.F. v. Roblox Corporation

District Court, N.D. California · 2025-12-08 · Roblox Corporation

Issue: Whether Roblox Corporation is liable under negligence, products liability, and consumer protection theories for allegedly defective platform design—specifically the absence of age verification, identity screening, and effective parental controls—that enabled an adult predator to groom and sexually exploit a 13-year-old minor user, and whether §230 of the Communications Decency Act bars those claims.

Why It Matters: The case tests whether product-design and failure-to-warn theories targeting a platform's architectural choices—such as self-reported age fields, default open-messaging settings, and the absence of verification tools—can survive §230 immunity by being framed as claims arising from the defendant's own conduct rather than third-party content, a distinction that remains actively contested across circuits and is central to ongoing efforts to impose platform liability for child exploitation harms.

View on CourtListener →
Brief · AI Liability · Section 230 · First Amendment · Complaint

The New York Times Company v. Perplexity AI, Inc.

District Court, S.D. New York · 2025-12-05 · Perplexity AI

Issue: Whether Perplexity AI's unauthorized scraping, copying, and redistribution of copyrighted journalistic content through its retrieval-augmented generation (RAG) "answer engine" products constitutes copyright infringement under the Copyright Act, 17 U.S.C. § 101 et seq., and whether Perplexity's attribution of AI-generated "hallucinations" and content with undisclosed omissions to The New York Times constitutes trademark infringement and false designation of origin under the Lanham Act, 15 U.S.C. § 1051 et seq.

Why It Matters: This complaint directly tests whether copyright law's input/output analytical framework applies to RAG-based AI systems — potentially establishing that liability can attach at both the training/indexing stage and the generation stage — and separately advances the question of whether AI hallucinations falsely attributed to a known news brand constitute actionable trademark infringement and false designation of origin under the Lanham Act, a theory with broad implications for AI developer liability in the media context.

View on CourtListener →
Brief · AI Liability · Section 230 · First Amendment · Motion to Dismiss

Chicago Tribune Company, LLC v. Perplexity AI, Inc.

District Court, S.D. New York · 2025-12-04 · Perplexity AI

Issue: Whether an AI-powered search and answer platform's alleged reproduction and summarization of news publishers' content without authorization gives rise to claims sounding in deceptive practices or unfair competition under applicable federal or state law.

Why It Matters: Insufficient text to determine the precise precedential impact, as the motion's arguments and the court's ruling (if any) are not included in the document; however, the case is notable as part of emerging litigation testing whether AI systems that ingest and repackage journalism can face civil liability under deceptive practices or unfair competition theories independent of copyright claims.

View on CourtListener →
First Amendment

Riddle v. X Corp

Court of Appeals for the Fifth Circuit · 3 filings
2025-11-18

Why It Matters: The opposition brief signals that §230 and the First Amendment jointly operate as a defense against court-ordered compelled reinstatement of suspended accounts, a position that, if adopted by the Fifth Circuit, would reinforce platform discretion over content moderation decisions even in the context of pending litigation; the brief also illustrates how procedural mechanisms—Rule 8 exhaustion requirements and local emergency motion rules—may serve as threshold barriers preventing appellate courts from reaching the merits of platform-liability disputes.

View on CourtListener →
2025-11-18 · Appellate Opinion

Why It Matters: The brief squarely presents — as an opening brief, without a ruling on the merits — the unresolved question of whether a platform may simultaneously claim § 230's "not-the-speaker" immunity and First Amendment editorial-discretion protection for the same content-moderation act, a tension left open after *Moody v. NetChoice*; a Fifth Circuit ruling on that question would create binding precedent directly governing how platforms plead immunity in content-moderation litigation across the circuit.

View on CourtListener →
2025-11-18 · Appellate Opinion

Why It Matters: If the Fifth Circuit addresses the merits, its ruling on whether §230(c)(1) immunity and First Amendment editorial-discretion protection can be invoked simultaneously for identical content-moderation conduct would create binding circuit precedent directly relevant to platform liability frameworks left open after *Moody v. NetChoice*, 603 U.S. 707 (2024); the court's treatment of the spoliation-mootness question could likewise determine whether Rule 37(e) has any practical force against defendants who complete evidence destruction before a ruling issues.

View on CourtListener →
Brief · First Amendment · Other

NetChoice v. Jason S. Miyares

District Court, E.D. Virginia · 2025-11-17 · Social media platforms (represented collectively by NetChoice trade association)

Issue: In *NetChoice v. Miyares*, Virginia's Attorney General argues that a federal district court improperly blocked enforcement of Virginia SB 854 — a law imposing default daily time limits on minors' social media use that parents can override — without first performing the application-by-application analysis that the Supreme Court's 2024 decision in *Moody v. NetChoice* requires before a law can be enjoined on its face. The brief also presses two substantive questions: whether SB 854's exclusion of platforms offering news, sports, and entertainment content is a content-neutral functional distinction or a subject-matter carveout that triggers heightened scrutiny, and whether a parental-override time limit survives intermediate scrutiny as a narrowly tailored child-protection measure.

Why It Matters: A wave of near-identical state laws restricting minors' access to social media is simultaneously moving through federal courts in Florida, Texas, and elsewhere, making the procedural and substantive arguments here broadly consequential. If the Fourth Circuit stays the injunction on *Moody* procedural grounds, it will signal to district courts nationwide that facial First Amendment challenges to platform-regulation statutes must clear a significantly higher bar before any injunction issues — a development that would reshape litigation strategy in dozens of pending cases. The content-neutrality argument carries equally high stakes: if a statute that facially names "news, sports, and entertainment" in its definitional exclusions can nonetheless be characterized as a neutral functional distinction, states gain a workable template for drafting minor-protection laws that avoid strict scrutiny. The brief's success or failure will also clarify how far *Free Speech Coalition v. Paxton*'s intermediate-scrutiny reasoning extends beyond age-verification-for-explicit-content contexts into the time-limit-with-parental-override framework Virginia has chosen.

View on CourtListener →
Other Filing · Section 230 · Other

Doe v. X Corp.

District Court, N.D. Texas · 2025-11-14 · X Corp. (Twitter)

Issue: Whether the "produced by force, fraud, misrepresentation, or coercion" exception to 15 U.S.C. § 6851(b)(4)(A)'s commercial-pornography exclusion encompasses a third party's unauthorized copying and reposting of consensually created commercial pornographic content—thereby imposing liability on X Corp. and xAI Corp. for hosting and using that content—and whether § 230(c)(1) independently bars such claims.

Why It Matters: This decision establishes that platforms sharing user-uploaded content with AI training systems do not face liability under the federal NCII statute for third-party-posted commercial pornography, and it reinforces a narrow reading of § 230's intellectual property exception that preserves broad platform immunity for privacy-based tort claims—potentially shielding AI developers like xAI from statutory damages when they receive content from platform partners rather than directly from tortious actors.

View on CourtListener →
Exhibit · First Amendment · Other

Meta Platforms, Inc. v. Bonta

District Court, N.D. California · 2025-11-13 · Meta Platforms, Inc.

Issue: Whether social media platform defendants (Meta, TikTok, Snap, and Google/YouTube) are entitled to summary judgment on school districts' negligence, failure-to-warn, and public nuisance claims arising from the platforms' design features and algorithmic systems alleged to cause adolescent addiction and mental health harm.

Why It Matters: The California AG's use of the MDL summary judgment record as evidence in the *Bonta* preliminary injunction proceeding signals that state regulators are actively leveraging private litigation findings to resist platform efforts to enjoin state enforcement, potentially reinforcing the evidentiary foundation for state-level regulation of platform design and youth safety obligations.

View on CourtListener →
Brief · First Amendment · Section 230 · Complaint

Amazon.com Services LLC v. Perplexity AI, Inc.

District Court, N.D. California · 2025-11-04 · Perplexity AI (AI search engine / generative AI platform)

Issue: Insufficient text to determine — the summons identifies Amazon.com Services LLC as plaintiff and Perplexity AI, Inc. as defendant but does not disclose the specific legal claims, statutes, or theories of liability asserted in the underlying complaint.

Why It Matters: Insufficient text to determine — the summons alone reveals only the identity of the parties and the forum, not the legal theories that would bear on platform liability, First Amendment doctrine, or AI regulation.

View on CourtListener →
First Amendment

Computer & Communications Industry Association v. Paxton

District Court, W.D. Texas · 3 filings
Amicus Brief
2025-10-16 · Other

Why It Matters: The brief advances two arguments worth watching across the broader wave of child online safety litigation. First, the conduct-regulation framing — that age-gating requirements target platform business practices rather than expressive content — is the central legal lever that could determine whether strict scrutiny applies at all; if it succeeds, it substantially lowers the bar for states defending these statutes. Second, the brief surfaces a genuinely open doctrinal question that *Moody v. NetChoice* (2024) has made more acute: whether laws that in practice restrict which apps minors can access implicate platform editorial discretion regardless of how neutrally they are drafted, a tension the brief does not address. The credibility of the "disinterested scholars" posture is also contestable given Thayer's drafting role, and opposing counsel should be expected to press that point in any response.

View on CourtListener →
2025-10-16 · Other

Why It Matters: This amicus brief advances a content-neutrality framework specifically designed to distinguish SB 2420 from statutes invalidated in *NetChoice v. Griffin* and *Brown v. Entertainment Merchants Association*, potentially offering courts a doctrinal path to uphold app-store child-safety regulations by classifying gatekeeping and contracting functions as commercial conduct rather than protected editorial discretion — a distinction that, if accepted, could broadly affect the constitutional viability of similar legislation in other states.

View on CourtListener →
2025-10-16 · Other

Why It Matters: This brief illustrates how states are attempting to circumvent First Amendment platform-autonomy challenges by framing minor-protective legislation as commercial contract regulation rather than speech regulation, a theory that—if accepted—could substantially limit the reach of *Moody v. NetChoice* in the context of app store transactions and AI product liability for minors.

View on CourtListener →
Brief · AI Liability · Section 230 · First Amendment · Complaint

D.A. v. Roblox Corporation

District Court, N.D. California · 2025-10-16 · Roblox Corporation

Issue: Insufficient text to determine.

Why It Matters: Insufficient text to determine; the transmitted document consists solely of 109 repeated docket-page citations with no substantive content rendered.

View on CourtListener →
Brief · Section 230 · First Amendment · Complaint

Doe v. Roblox Corporation

District Court, E.D. Arkansas · 2025-10-07 · Roblox Corporation

Issue: Whether Roblox Corporation and Discord, Inc. are liable under product liability (design defect), negligence, and fraud theories for injuries a minor suffered from sexual exploitation facilitated through their platforms, and whether those claims are barred by §230(c)(1) of the Communications Decency Act.

Why It Matters: This complaint presents a direct test of whether product liability and fraud theories premised on platform design choices — rather than on Defendants' role as publishers of third-party content — can survive anticipated §230 preemption arguments, potentially advancing the circuit split over whether design-defect claims targeting a platform's own architectural decisions fall outside §230's immunity.

View on CourtListener →