Browse Cases

137 results
Brief · AI Liability · Section 230 · First Amendment · Complaint

D.W. v. Character Technologies, Inc.

District Court, E.D. Virginia · 2025-12-19 · Character Technologies, Inc. (Character.AI); Google LLC

Issue: Whether Character Technologies, Inc., its individual founders, and Google LLC are strictly liable under product liability theories of design defect and failure to warn, and liable under negligence, negligence per se, COPPA, and related tort theories, for physical and psychological injuries sustained by an eleven-year-old minor caused by the allegedly defective design of the Character.AI generative AI chatbot product.

Why It Matters: The complaint's explicit framing of a generative AI chatbot as a standalone "product" subject to traditional products liability doctrine — rather than as an interactive computer service shielded by Section 230 — directly advances the unsettled question of whether strict liability design-defect and failure-to-warn claims against AI developers can survive Section 230 and First Amendment challenges, potentially setting precedent on how courts classify AI-generated outputs for tort liability purposes.

Section 230

In re: Roblox Corporation Child Sexual Exploitation and Assault Litigation

District Court, N.D. California · 4 filings
2025-12-12 · Discovery Order

Why It Matters: Roblox is among the largest platforms used by minors, and this MDL will test whether legal theories forged in social-media-addiction cases can survive transplantation into the more demanding context of child sexual exploitation, where FOSTA-SESTA imposes a knowledge-and-benefit standard that operates independently of, and in addition to, any product-design theory. The discovery fight here functions as a proxy for the broader merits battle: if Plaintiffs succeed in compelling early production of state-investigation materials before Roblox can litigate its § 230 defenses, they will have established a procedural posture that significantly advantages the litigation going forward. If the court adopts Plaintiffs' framework, it will implicitly answer — at least at the discovery stage — whether FOSTA-SESTA's exception forecloses § 230-based objections from the outset, a ruling that could be cited across other CSEA platform litigations nationwide.

2025-12-12 · Other

Why It Matters: The order signals that courts may decline to allow §230 to function as a shield against early discovery in algorithmic-harm litigation, particularly where the claims are framed as product design liability rather than publisher liability for third-party content — a framing with direct relevance to the Roblox proceeding in which this document was filed as an exhibit.

2025-12-12 · Motion to Dismiss

Why It Matters: This MDL consolidates a large volume of child sexual exploitation claims against major platforms and will require the court to rule on the outer boundaries of §230 immunity and First Amendment protection for content moderation in the context of minor-safety harms—an area where circuit courts have generally upheld immunity but public and legislative pressure to narrow it is intense. The court's resolution of whether algorithmic and editorial decisions by platforms constitute protected expression under *Moody*, and whether §230 bars claims framed as product liability or negligent design rather than publisher liability, could significantly shape the litigation landscape for platform child-safety suits nationwide.

First Amendment

AARON v. BONDI

District Court, District of Columbia · 3 filings
2025-12-08 · Motion to Dismiss

Why It Matters: This case sits at the leading edge of post-*Murthy* litigation testing how far the government can pressure private platforms to remove disfavored content before crossing the constitutional line into coercion — and how easily those claims can survive dismissal. The brief forces resolution of several genuinely unsettled questions: whether *Murthy*'s "dispel the obvious alternative explanation" requirement applies with full force at the Rule 12(b) pleading stage, or is instead modulated by *Twombly*/*Iqbal*'s plausibility standard when a third party like Apple has offered a facially legitimate competing reason for its own conduct; whether *Vullo*'s objective-threat standard can be satisfied by a coordinated pattern of public statements and inter-agency signals rather than a single private communication with explicit regulatory teeth; and, on retaliation standing, whether specifically directed, named-and-targeted government pressure — as distinct from the broadly speculative surveillance risk *Clapper* addressed — can constitute concrete First Amendment injury before any enforcement action is completed. A ruling on that last question could produce a significant clarifying precedent.

2025-12-08 · Opposition to Motion to Dismiss

Why It Matters: This case tests whether the government can effectively remove a legal app from circulation by calling a private company and asking — not ordering — it to act, without ever filing a charge or passing a law. The standing fight may prove as consequential as the underlying free speech question: a ruling that plaintiffs cannot trace Apple's decision to the government's conduct would give officials a roadmap for suppressing speech through informal corporate pressure with minimal constitutional accountability. Plaintiffs' procedural-posture argument — that *Murthy* sets an evidentiary ceiling, not a pleading floor — is the brief's most significant doctrinal contribution, and no circuit has yet authoritatively resolved that question. If courts accept it, same-day compliance following explicit demand language may become the template for how future plaintiffs plead jawboning claims in the post-*Murthy* landscape.

2025-12-08 · Motion to Dismiss

Why It Matters: This brief tests whether *Murthy v. Missouri*'s demanding causation framework, developed for a sprawling multi-platform content-moderation pressure apparatus, can be extended to defeat standing in a materially narrower scenario involving a single named app, a single platform, and an identifiable sequence of government contact followed by removal—the kind of granular fact pattern *Murthy* itself suggested was necessary for standing in the first place. Defendants' treatment of Apple's post-hoc public explanation as conclusively defeating a pretext argument at the pleading stage is legally aggressive and, if accepted, would create a significant structural barrier to coercion claims: platforms could insulate government pressure from judicial scrutiny simply by invoking an existing content policy. The brief's retaliation argument, anchored to *Media Matters v. Paxton*, raises the open question of whether an explicit, named, on-record statement of investigative interest by a senior law enforcement official crosses from non-actionable criticism into the individualized targeting recognized in cases Defendants themselves cite—a line the D.C. Circuit has not yet clearly drawn in this context.

Brief · Section 230 · First Amendment · Complaint

Doe S.F. v. Roblox Corporation

District Court, N.D. California · 2025-12-08 · Roblox Corporation

Issue: Whether Roblox Corporation is liable under negligence, products liability, and consumer protection theories for allegedly defective platform design—specifically the absence of age verification, identity screening, and effective parental controls—that enabled an adult predator to groom and sexually exploit a 13-year-old minor user, and whether §230 of the Communications Decency Act bars those claims.

Why It Matters: The case tests whether product-design and failure-to-warn theories targeting a platform's architectural choices—such as self-reported age fields, default open-messaging settings, and the absence of verification tools—can survive §230 immunity by being framed as claims arising from the defendant's own conduct rather than third-party content, a distinction that remains actively contested across circuits and is central to ongoing efforts to impose platform liability for child exploitation harms.

Brief · AI Liability · Section 230 · First Amendment · Complaint

The New York Times Company v. Perplexity AI, Inc.

District Court, S.D. New York · 2025-12-05 · Perplexity AI

Issue: Whether Perplexity AI's unauthorized scraping, copying, and redistribution of copyrighted journalistic content through its retrieval-augmented generation (RAG) "answer engine" products constitutes copyright infringement under the Copyright Act, 17 U.S.C. § 101 et seq., and whether Perplexity's attribution to The New York Times of AI-generated "hallucinations" and of content containing undisclosed omissions constitutes trademark infringement and false designation of origin under the Lanham Act, 15 U.S.C. § 1051 et seq.

Why It Matters: This complaint directly tests whether copyright law's input/output analytical framework applies to RAG-based AI systems — potentially establishing that liability can attach at both the training/indexing stage and the generation stage — and separately advances the question of whether AI hallucinations falsely attributed to a known news brand constitute actionable trademark infringement and false designation of origin under the Lanham Act, a theory with broad implications for AI developer liability in the media context.

Brief · AI Liability · Section 230 · First Amendment · Motion to Dismiss

Chicago Tribune Company, LLC v. Perplexity AI, Inc.

District Court, S.D. New York · 2025-12-04 · Perplexity AI

Issue: Whether an AI-powered search and answer platform's alleged reproduction and summarization of news publishers' content without authorization gives rise to claims sounding in deceptive practices or unfair competition under applicable federal or state law.

Why It Matters: Insufficient text to determine the precise precedential impact, as the motion's arguments and the court's ruling (if any) are not included in the document; however, the case is notable as part of emerging litigation testing whether AI systems that ingest and repackage journalism can face civil liability under deceptive practices or unfair competition theories independent of copyright claims.

First Amendment

Riddle v. X Corp

Court of Appeals for the Fifth Circuit · 3 filings
2025-11-18

Why It Matters: The opposition brief signals that §230 and the First Amendment jointly operate as a defense against court-ordered compelled reinstatement of suspended accounts, a position that, if adopted by the Fifth Circuit, would reinforce platform discretion over content moderation decisions even in the context of pending litigation; the brief also illustrates how procedural mechanisms—Rule 8 exhaustion requirements and local emergency motion rules—may serve as threshold barriers preventing appellate courts from reaching the merits of platform-liability disputes.

2025-11-18 · Appellate Opinion

Why It Matters: The brief squarely presents — as an opening brief, without a ruling on the merits — the unresolved question of whether a platform may simultaneously claim § 230's "not-the-speaker" immunity and First Amendment editorial-discretion protection for the same content-moderation act, a tension left open after *Moody v. NetChoice*; a Fifth Circuit ruling on that question would create binding precedent directly governing how platforms plead immunity in content-moderation litigation across the circuit.

2025-11-18 · Appellate Opinion

Why It Matters: If the Fifth Circuit addresses the merits, its ruling on whether §230(c)(1) immunity and First Amendment editorial-discretion protection can be invoked simultaneously for identical content-moderation conduct would create binding circuit precedent directly relevant to platform liability frameworks left open after *Moody v. NetChoice*, 603 U.S. 707 (2024); the court's treatment of the spoliation-mootness question could likewise determine whether Rule 37(e) has any practical force against defendants who complete evidence destruction before a ruling issues.

Brief · First Amendment · Other

NetChoice v. Jason S. Miyares

District Court, E.D. Virginia · 2025-11-17 · Social media platforms (represented collectively by NetChoice trade association)

Issue: In *NetChoice v. Miyares*, Virginia's Attorney General argues that a federal district court improperly blocked enforcement of Virginia SB 854 — a law imposing default daily time limits on minors' social media use that parents can override — without first performing the application-by-application analysis that the Supreme Court's 2024 decision in *Moody v. NetChoice* requires before a law can be enjoined on its face. The brief also presses two substantive questions: whether SB 854's exclusion of platforms offering news, sports, and entertainment content is a content-neutral functional distinction or a subject-matter carveout that triggers heightened scrutiny, and whether a parental-override time limit survives intermediate scrutiny as a narrowly tailored child-protection measure.

Why It Matters: A wave of near-identical state laws restricting minors' access to social media is simultaneously moving through federal courts in Florida, Texas, and elsewhere, making the procedural and substantive arguments here broadly consequential. If the Fourth Circuit stays the injunction on *Moody* procedural grounds, it will signal to district courts nationwide that facial First Amendment challenges to platform-regulation statutes must clear a significantly higher bar before any injunction issues — a development that would reshape litigation strategy in dozens of pending cases. The content-neutrality argument carries equally high stakes: if a statute that facially names "news, sports, and entertainment" in its definitional exclusions can nonetheless be characterized as a neutral functional distinction, states gain a workable template for drafting minor-protection laws that avoid strict scrutiny. The brief's success or failure will also clarify how far *Free Speech Coalition v. Paxton*'s intermediate-scrutiny reasoning extends beyond age-verification-for-explicit-content contexts into the time-limit-with-parental-override framework Virginia has chosen.

Exhibit · First Amendment · Other

Meta Platforms, Inc. v. Bonta

District Court, N.D. California · 2025-11-13 · Meta Platforms, Inc.

Issue: Whether social media platform defendants (Meta, TikTok, Snap, and Google/YouTube) are entitled to summary judgment on school districts' negligence, failure-to-warn, and public nuisance claims arising from the platforms' design features and algorithmic systems alleged to cause adolescent addiction and mental health harm.

Why It Matters: The California AG's use of the MDL summary judgment record as evidence in the *Bonta* preliminary injunction proceeding signals that state regulators are actively leveraging private litigation findings to resist platform efforts to enjoin state enforcement, potentially reinforcing the evidentiary foundation for state-level regulation of platform design and youth safety obligations.

Brief · First Amendment · Section 230 · Complaint

Amazon.com Services LLC v. Perplexity AI, Inc.

District Court, N.D. California · 2025-11-04 · Perplexity AI (AI search engine / generative AI platform)

Issue: Insufficient text to determine — the summons identifies Amazon.com Services LLC as plaintiff and Perplexity AI, Inc. as defendant but does not disclose the specific legal claims, statutes, or theories of liability asserted in the underlying complaint.

Why It Matters: Insufficient text to determine — the summons alone reveals only the identity of the parties and the forum, not the legal theories that would bear on platform liability, First Amendment doctrine, or AI regulation.

First Amendment

Computer & Communications Industry Association v. Paxton

District Court, W.D. Texas · 3 filings
Amicus Brief
2025-10-16 · Other

Why It Matters: The brief advances two arguments worth watching across the broader wave of child online safety litigation. First, the conduct-regulation framing — that age-gating requirements target platform business practices rather than expressive content — is the central legal lever that could determine whether strict scrutiny applies at all; if it succeeds, it substantially lowers the bar for states defending these statutes. Second, the brief surfaces a genuinely open doctrinal question that *Moody v. NetChoice* (2024) has made more acute: whether laws that in practice restrict which apps minors can access implicate platform editorial discretion regardless of how neutrally they are drafted, a tension the brief does not address. The credibility of the "disinterested scholars" posture is also contestable given Thayer's drafting role, and opposing counsel should be expected to press that point in any response.

2025-10-16 · Other

Why It Matters: This amicus brief advances a content-neutrality framework specifically designed to distinguish SB 2420 from statutes invalidated in *NetChoice v. Griffin* and *Brown v. Entertainment Merchants Association*, potentially offering courts a doctrinal path to uphold app-store child-safety regulations by classifying gatekeeping and contracting functions as commercial conduct rather than protected editorial discretion — a distinction that, if accepted, could broadly affect the constitutional viability of similar legislation in other states.

2025-10-16 · Other

Why It Matters: This brief illustrates how states are attempting to circumvent First Amendment platform-autonomy challenges by framing minor-protective legislation as commercial contract regulation rather than speech regulation, a theory that—if accepted—could substantially limit the reach of *Moody v. NetChoice* in the context of app store transactions and AI product liability for minors.
