Browse Cases
142 results

NetChoice v. Wilson
Issue: Whether the South Carolina Age-Appropriate Design Code Act's requirements that covered online services exercise "reasonable care" to prevent harms to minors, disable certain engagement and discovery features, screen third-party advertising, and submit to third-party audits (1) violate the First Amendment's prohibitions on content-based speech restrictions and compelled speech, (2) are preempted by §230(c)(1) of the Communications Decency Act and by COPPA, and (3) violate the Commerce Clause and the Due Process Clause.
Why It Matters: This complaint extends a growing line of coordinated First Amendment challenges by NetChoice to state-level online minor-protection laws, directly invoking *Moody v. NetChoice* and Fourth Circuit precedent to argue that platform curation and algorithmic editorial judgment are categorically protected expression, which, if adopted by the court, would significantly constrain states' ability to regulate platform design features affecting speech.
View on CourtListener →

St. Clair v. X.AI Holdings Corp.
Why It Matters: This case presents an early and direct test of whether §230 immunity extends to an AI-powered generative image tool when harmful content is produced by third-party user prompts—a question with significant implications for how courts will treat AI platforms under existing intermediary liability doctrine and whether the "neutral tools" framework articulated in *Herrick v. Grindr* applies to generative AI systems.
View on CourtListener →

Why It Matters: This motion directly tests whether Section 230 immunity extends to content affirmatively generated by an AI system — as opposed to merely hosted third-party content — a question with broad implications for AI developer liability; if the court accepts plaintiff's framing that AI-generated output constitutes the developer's own content, it could establish a significant precedent foreclosing Section 230 as a defense for generative AI systems and accelerating civil liability exposure for AI developers under existing tort and statutory frameworks.
View on CourtListener →

Welkin v. Meta Platforms, Inc.
Issue: Whether §230(c) of the Communications Decency Act immunizes Meta from an IIED claim and request for injunctive relief arising from Meta's alleged failure to remove a third-party Facebook impersonation profile whose content Iranian authorities reportedly used as evidence in criminal proceedings against the plaintiff's mother.
Why It Matters: The motion squarely tests whether §230(c) shields a platform from tort liability and injunctive relief when a plaintiff alleges harm flowing not from the platform's affirmative conduct but from its editorial decision to only partially remove third-party content flagged as an impersonation account, potentially reinforcing the breadth of publisher immunity for content-moderation decisions short of complete removal.
View on CourtListener →

Snap, Inc. v. The Eighth Judicial District Court of the State
Issue: Whether Section 230 of the Communications Decency Act bars the State of Nevada's claims under the Nevada Deceptive Trade Practices Act (NDTPA), and whether the First Amendment precludes the State's negligence claim against Snapchat.
Why It Matters: This decision represents a significant development at the intersection of Section 230 immunity, First Amendment protection, and state enforcement actions against social media platforms. The court's conclusion that negligence claims can proceed despite First Amendment concerns, while consumer protection claims remain Section 230-barred, suggests courts may be creating new pathways for platform liability through traditional tort theories that avoid Section 230's broad publisher immunity shield—particularly relevant given the *Garcia v. Character.AI* framework for product liability claims against technology platforms.
View on CourtListener →

Doe v. OpenAI, LP
Why It Matters: Insufficient text to determine. Note: the submitted document contains only page-header metadata (case number, document number, and page citations for all 28 pages of Document 10 in Case 1:25-cv-04564) and no actual text from the filing. None of the substantive allegations, arguments, rulings, or procedural history are visible in the provided excerpt, so a complete and accurate summary cannot be prepared without the underlying text.
View on CourtListener →

Why It Matters: The complaint is a pro se filing asserting legally extraordinary claims — including a mathematically derived infringement probability of 10⁻⁴⁵ and the assertion that informal written descriptions of broad AI concepts constitute copyrightable expression sufficient to support trillion-dollar damages — and it is unlikely to survive threshold screening under Rule 12 or the copyright originality standard of *Feist Publications*; however, it illustrates a growing category of pro se litigation attempting to impose intellectual property and RICO liability on AI developers for the architecture of large language models, a question courts have not yet resolved on the merits.
View on CourtListener →

Carreyrou v. Anthropic PBC
Issue: Whether Anthropic, Google, Meta, xAI, Perplexity, Apple, NVIDIA, and OpenAI are liable under the Copyright Act for willful infringement by downloading plaintiffs' copyrighted books from shadow libraries (including LibGen, Z-Library, Anna's Archive, and The Pile/Books3) and reproducing those works during LLM training, preprocessing, and fine-tuning without license or permission.
Why It Matters: This complaint advances the unsettled question of whether the use of pirated training datasets constitutes willful copyright infringement by LLM developers at each stage of the AI development pipeline, potentially establishing that liability attaches not only at initial download but also at preprocessing, deduplication, and iterative fine-tuning; the plaintiffs' deliberate individual-action strategy, if successful, could foreclose industry efforts to resolve mass AI copyright claims through low-value class settlements.
View on CourtListener →

D.W. v. Character Technologies, Inc.
Why It Matters: Insufficient text to determine the specific legal theories advanced or the precise harms alleged; however, the filing represents a civil action directly targeting an AI chatbot developer for user harms, which could contribute to the developing body of litigation testing the boundaries of tort and product liability frameworks as applied to conversational AI systems.
View on CourtListener →

Why It Matters: The complaint's explicit framing of a generative AI chatbot as a standalone "product" subject to traditional products liability doctrine — rather than as an interactive computer service shielded by Section 230 — directly advances the unsettled question of whether strict liability design-defect and failure-to-warn claims against AI developers can survive Section 230 and First Amendment challenges, potentially setting precedent on how courts classify AI-generated outputs for tort liability purposes.
View on CourtListener →

Why It Matters: Roblox is among the largest platforms used by minors, and this MDL will test whether legal theories forged in social-media-addiction cases can survive transplantation into the more demanding context of child sexual exploitation, where FOSTA-SESTA imposes a knowledge-and-benefit standard that operates independently of and in addition to any product-design theory. The discovery fight being constructed here functions as a proxy for the broader merits battle: if Plaintiffs succeed in compelling early production of state-investigation materials before Roblox can litigate its § 230 defenses, they will have established a procedural posture that significantly advantages the litigation going forward. If the court adopts Plaintiffs' framework, it will implicitly answer — at least at the discovery stage — whether FOSTA-SESTA's exception forecloses § 230-based objections from the case's outset, a ruling that could be cited across other child-sexual-exploitation (CSEA) platform litigations nationwide.
View on CourtListener →

Why It Matters: The order signals that courts may decline to allow §230 to function as a shield against early discovery in algorithmic-harm litigation, particularly where the claims are framed as product design liability rather than publisher liability for third-party content — a framing with direct relevance to the Roblox proceeding in which this document was filed as an exhibit.
View on CourtListener →

Why It Matters: This MDL consolidates a large volume of child sexual exploitation claims against major platforms and will require the court to rule on the outer boundaries of §230 immunity and First Amendment protection for content moderation in the context of minor-safety harms—an area where circuit courts have generally upheld immunity but public and legislative pressure to narrow it is intense. The court's resolution of whether algorithmic and editorial decisions by platforms constitute protected expression under *Moody*, and whether §230 bars claims framed as product liability or negligent design rather than publisher liability, could significantly shape the litigation landscape for platform child-safety suits nationwide.
View on CourtListener →

Doe S.F. v. Roblox Corporation
Issue: Whether Roblox Corporation is liable under negligence, products liability, and consumer protection theories for allegedly defective platform design—specifically the absence of age verification, identity screening, and effective parental controls—that enabled an adult predator to groom and sexually exploit a 13-year-old minor user, and whether §230 of the Communications Decency Act bars those claims.
Why It Matters: The case tests whether product-design and failure-to-warn theories targeting a platform's architectural choices—such as self-reported age fields, default open-messaging settings, and the absence of verification tools—can survive §230 immunity by being framed as claims arising from the defendant's own conduct rather than third-party content, a distinction that remains actively contested across circuits and is central to ongoing efforts to impose platform liability for child exploitation harms.
View on CourtListener →

The New York Times Company v. Perplexity AI, Inc.
Issue: Whether Perplexity AI's unauthorized scraping, copying, and redistribution of copyrighted journalistic content through its retrieval-augmented generation (RAG) "answer engine" products constitutes copyright infringement under the Copyright Act, 17 U.S.C. § 101 et seq., and whether Perplexity's attribution of AI-generated "hallucinations" and content with undisclosed omissions to The New York Times constitutes trademark infringement and false designation of origin under the Lanham Act, 15 U.S.C. § 1051 et seq.
Why It Matters: This complaint directly tests whether copyright law's input/output analytical framework applies to RAG-based AI systems — potentially establishing that liability can attach at both the training/indexing stage and the generation stage — and separately advances the question of whether AI hallucinations falsely attributed to a known news brand constitute actionable trademark infringement and false designation of origin under the Lanham Act, a theory with broad implications for AI developer liability in the media context.
View on CourtListener →

Chicago Tribune Company, LLC v. Perplexity AI, Inc.
Issue: Whether an AI-powered search and answer platform's alleged reproduction and summarization of news publishers' content without authorization gives rise to claims sounding in deceptive practices or unfair competition under applicable federal or state law.
Why It Matters: Insufficient text to determine the precise precedential impact, as the motion's arguments and the court's ruling (if any) are not included in the document; however, the case is notable as part of emerging litigation testing whether AI systems that ingest and repackage journalism can face civil liability under deceptive practices or unfair competition theories independent of copyright claims.
View on CourtListener →

Riddle v. X Corp
Why It Matters: The brief squarely presents — as an opening brief, without a ruling on the merits — the unresolved question of whether a platform may simultaneously claim § 230's "not-the-speaker" immunity and First Amendment editorial-discretion protection for the same content-moderation act, a tension left open after *Moody v. NetChoice*; a Fifth Circuit ruling on that question would create binding precedent directly governing how platforms plead immunity in content-moderation litigation across the circuit.
View on CourtListener →

Why It Matters: If the Fifth Circuit addresses the merits, its ruling on whether §230(c)(1) immunity and First Amendment editorial-discretion protection can be invoked simultaneously for identical content-moderation conduct would create binding circuit precedent directly relevant to platform liability frameworks left open after *Moody v. NetChoice*, 603 U.S. 707 (2024); the court's treatment of the spoliation-mootness question could likewise determine whether Rule 37(e) has any practical force against defendants who complete evidence destruction before a ruling issues.
View on CourtListener →

Doe v. X Corp.
Issue: Whether the "produced by force, fraud, misrepresentation, or coercion" exception to 15 U.S.C. § 6851(b)(4)(A)'s commercial-pornography exclusion encompasses a third party's unauthorized copying and reposting of consensually created commercial pornographic content—thereby imposing liability on X Corp. and xAI Corp. for hosting and using that content—and whether § 230(c)(1) independently bars such claims.
Why It Matters: This decision establishes that platforms sharing user-uploaded content with AI training systems do not face liability under the federal NCII statute for third-party-posted commercial pornography, and it reinforces a narrow reading of § 230's intellectual property exception that preserves broad platform immunity for privacy-based tort claims—potentially shielding AI developers like xAI from statutory damages when they receive content from platform partners rather than directly from tortious actors.
View on CourtListener →