Browse Cases
207 results

NetChoice v. Wilson
Issue: Whether the South Carolina Age-Appropriate Design Code Act's requirements that covered online services exercise "reasonable care" to prevent harms to minors, disable certain engagement and discovery features, screen third-party advertising, and submit to third-party audits violate the First Amendment's prohibitions on content-based speech restrictions and compelled speech, are preempted by §230(c)(1) of the Communications Decency Act and COPPA, and violate the Commerce Clause and Due Process Clause.
Why It Matters: This complaint extends a growing line of coordinated First Amendment challenges by NetChoice to state-level online minor-protection laws, directly invoking *Moody v. NetChoice* and Fourth Circuit precedent to argue that platform curation and algorithmic editorial judgment are categorically protected expression, which, if adopted by the court, would significantly constrain states' ability to regulate platform design features affecting speech.
View on CourtListener →

NEWSGUARD TECHNOLOGIES v. FEDERAL TRADE COMMISSION
Issue: In *NewsGuard Technologies v. FTC*, NewsGuard argues that the FTC's voluntary withdrawal of a Civil Investigative Demand did not moot its First Amendment and APA claims because the agency simultaneously obtained consent decrees in a separate antitrust proceeding that condition major advertising-agency mergers on prohibitions against using NewsGuard's services. The non-obvious dimension is that the alleged suppression did not occur through a direct regulatory order targeting NewsGuard — it occurred through merger approval conditions negotiated with large corporate third parties who had independent counsel and agreed to the terms. NewsGuard contends this amounts to the same unconstitutional government coercion of private actors to silence a disfavored editorial voice, only now packaged inside a judicially approved antitrust settlement.
Why It Matters: This case sits at an unusual intersection of antitrust enforcement, First Amendment press freedom, and administrative law, and the core constitutional question it raises has broad implications: whether the federal government can effectively blacklist a journalistic organization from its market by embedding speech-adjacent conditions inside merger consent decrees, insulating that pressure from First Amendment scrutiny through the procedural form of a negotiated antitrust settlement. The most doctrinally significant move in this filing is the attempt to extend *Vullo*'s jawboning framework to consent decrees negotiated in arm's-length antitrust proceedings — a novel application that existing precedent neither clearly supports nor forecloses. If a court ultimately accepts NewsGuard's framing, it could significantly constrain the government's ability to include speech-adjacent conditions in antitrust settlements going forward, affecting how merger review is conducted whenever the target industry touches the flow of information or advertising.
View on CourtListener →

St. Clair v. X.AI Holdings Corp.
Why It Matters: This complaint is an early test of whether product liability doctrine—rather than Section 230 or First Amendment defenses—can be applied directly to an AI image-generation system, framing the chatbot itself as a defective product whose foreseeable output is nonconsensual intimate imagery; if courts allow strict liability claims to proceed on this theory, it could establish a significant avenue for AI developer liability that sidesteps traditional platform immunity arguments.
View on CourtListener →

Why It Matters: This case presents an early and direct test of whether §230 immunity extends to an AI-powered generative image tool when harmful content is produced by third-party user prompts—a question with significant implications for how courts will treat AI platforms under existing intermediary liability doctrine and whether the "neutral tools" framework articulated in *Herrick v. Grindr* applies to generative AI systems.
View on CourtListener →

Why It Matters: This motion directly tests whether Section 230 immunity extends to content affirmatively generated by an AI system — as opposed to merely hosted third-party content — a question with broad implications for AI developer liability; if the court accepts plaintiff's framing that AI-generated output constitutes the developer's own content, it could establish a significant precedent foreclosing Section 230 as a defense for generative AI systems and accelerating civil liability exposure for AI developers under existing tort and statutory frameworks.
View on CourtListener →

Welkin v. Meta Platforms, Inc.
Issue: Whether §230(c) of the Communications Decency Act immunizes Meta from an IIED claim and request for injunctive relief arising from Meta's alleged failure to remove a third-party Facebook impersonation profile whose content Iranian authorities reportedly used as evidence in criminal proceedings against the plaintiff's mother.
Why It Matters: The motion squarely tests whether §230(c) shields a platform from tort liability and injunctive relief when a plaintiff alleges harm flowing not from the platform's affirmative conduct but from its editorial decision to only partially remove third-party content flagged as an impersonation account, potentially reinforcing the breadth of publisher immunity for content-moderation decisions short of complete removal.
View on CourtListener →

Mayday Health v. Jackley
Why It Matters: The case advances the "jawboning" doctrine by testing the limits of state attorney general authority to use cease-and-desist letters and retaliatory enforcement actions to suppress politically disfavored but constitutionally protected online speech, and it raises a significant question about whether *Younger* abstention can shield such proceedings from federal judicial review when the proceedings are allegedly pretextual.
View on CourtListener →

Why It Matters: The case tests whether a state attorney general may use a consumer-protection enforcement threat as a mechanism to suppress a noncommercial publisher's truthful speech about out-of-state legal services — squarely implicating *Bigelow v. Virginia*'s protection for cross-border reproductive-health information — while also presenting a notable pleading-stage invocation of § 230(c)(1) as a shield against liability predicated on a website's hyperlinks to third-party content, potentially advancing the question of how § 230 interacts with state regulatory (rather than private civil) actions targeting a platform's linking choices.
View on CourtListener →

SNAP, INC. v. THE EIGHTH JUDICIAL DISTRICT COURT OF THE STATE
Issue: Whether Section 230 of the Communications Decency Act bars the State of Nevada's claims under the Nevada Deceptive Trade Practices Act (NDTPA), and whether the First Amendment precludes the State's negligence claim against Snapchat.
Why It Matters: This decision represents a significant development in the intersection of Section 230 immunity, First Amendment protection, and state enforcement actions against social media platforms. The court's conclusion that negligence claims can proceed despite First Amendment concerns, while consumer protection claims remain Section 230-barred, suggests courts may be creating new pathways for platform liability through traditional tort theories that avoid Section 230's broad publisher immunity shield—particularly relevant given the Garcia v. Character.AI framework for product liability claims against technology platforms.
View on CourtListener →

DOE v. OPENAI, LP
Why It Matters: Insufficient text to determine. Note: the document submitted contains only page-header metadata (case number, document number, and page citations for all 28 pages of Document 10 in Case 1:25-cv-04564) and no actual text content from the filing. None of the substantive allegations, arguments, rulings, or procedural history is visible in the provided excerpt, so a complete and accurate summary cannot be prepared without the underlying text.
View on CourtListener →

Why It Matters: The complaint is a pro se filing asserting legally extraordinary claims — including a mathematically derived infringement probability of 10⁻⁴⁵ and the assertion that informal written descriptions of broad AI concepts constitute copyrightable expression sufficient to support trillion-dollar damages — and it is unlikely to survive threshold screening under Rule 12 or the copyright originality standard of *Feist Publications*; however, it illustrates a growing category of pro se litigation attempting to impose intellectual property and RICO liability on AI developers for the architecture of large language models, a question courts have not yet resolved on the merits.
View on CourtListener →

Emily Lyons v. OpenAI Foundation
Issue: Whether this federal court action against OpenAI arising from an AI-linked murder-suicide should be dismissed or stayed under the *Colorado River* abstention doctrine in favor of an earlier-filed, parallel California state court action asserting identical product liability and UCL claims, and separately whether dismissal is required under California Code of Civil Procedure § 377.32 for plaintiff's failure to file the affidavit required of a decedent's successor in interest.
Why It Matters: This motion presents an early procedural test of whether federal courts will decline jurisdiction over AI product liability suits in favor of consolidating such claims in state court mass-tort coordination proceedings, potentially channeling the emerging wave of ChatGPT-related personal injury litigation into California's JCCP framework rather than federal court; the outcome may also signal how courts will manage the proliferation of parallel AI liability actions filed by different plaintiffs arising from the same underlying AI-assisted harm.
View on CourtListener →

X.AI LLC v. Rob Bonta
Issue: Whether California Assembly Bill 2013's mandatory public disclosure requirements compelling AI developers to reveal training dataset sources, descriptions, and data-point counts violate the First Amendment's prohibition on compelled speech, the Takings Clause's just-compensation requirement, and the void-for-vagueness doctrine as applied to xAI's proprietary generative AI training data.
Why It Matters: This complaint presents a direct First Amendment challenge to a state government's attempt to regulate AI transparency through mandatory disclosure of proprietary training data, potentially setting precedent on whether compelled disclosure regimes targeting AI development methods receive strict or intermediate scrutiny. The case also tests the outer boundary of trade-secret property rights as against state AI accountability legislation, a question no circuit court has yet resolved.
View on CourtListener →

Carreyrou v. Anthropic PBC
Why It Matters: This procedural dispute is an early but consequential test of whether mass AI copyright litigation against industry-wide defendants can proceed in a single forum, with the court's joinder ruling likely to determine whether fair use defenses—particularly the fourth-factor market-harm inquiry, which requires examining the aggregate effect of all defendants' conduct on the licensing market for AI training data—are adjudicated consistently or fragmented across parallel actions. The outcome may signal how courts will structure the wave of generative-AI copyright cases and whether the "industry-wide scheme" theory is sufficient to sustain multi-defendant joinder in AI training-data litigation.
View on CourtListener →

Why It Matters: This complaint advances the unsettled question of whether the use of pirated training datasets constitutes willful copyright infringement by LLM developers at each stage of the AI development pipeline, potentially establishing that liability attaches not only at initial download but also at preprocessing, deduplication, and iterative fine-tuning; the plaintiffs' deliberate individual-action strategy, if successful, could foreclose industry efforts to resolve mass AI copyright claims through low-value class settlements.
View on CourtListener →

D.W. v. Character Technologies, Inc.
Why It Matters: Insufficient text to determine the specific legal theories advanced or the precise harms alleged; however, the filing represents a civil action directly targeting an AI chatbot developer for user harms, which could contribute to the developing body of litigation testing the boundaries of tort and product liability frameworks as applied to conversational AI systems.
View on CourtListener →

Why It Matters: The complaint's explicit framing of a generative AI chatbot as a standalone "product" subject to traditional products liability doctrine — rather than as an interactive computer service shielded by Section 230 — directly advances the unsettled question of whether strict liability design-defect and failure-to-warn claims against AI developers can survive Section 230 and First Amendment challenges, potentially setting precedent on how courts classify AI-generated outputs for tort liability purposes.
View on CourtListener →

Why It Matters: Roblox is among the largest platforms used by minors, and this MDL will test whether legal theories forged in social-media-addiction cases can survive transplantation into the more demanding context of child sexual exploitation, where FOSTA-SESTA imposes a knowledge-and-benefit standard that operates independently of and in addition to any product-design theory. The discovery fight being constructed here functions as a proxy for the broader merits battle: if Plaintiffs succeed in compelling early production of state-investigation materials before Roblox can litigate its § 230 defenses, they will have established a procedural posture that significantly advantages the litigation going forward. If the court adopts Plaintiffs' framework, it will implicitly answer — at least at the discovery stage — whether FOSTA-SESTA's exception forecloses § 230-based objections from the case's outset, a ruling that could be cited across other CSEA platform litigations nationwide.
View on CourtListener →

Why It Matters: The order signals that courts may decline to allow §230 to function as a shield against early discovery in algorithmic-harm litigation, particularly where the claims are framed as product design liability rather than publisher liability for third-party content — a framing with direct relevance to the Roblox proceeding in which this document was filed as an exhibit.
View on CourtListener →

Why It Matters: This MDL consolidates a large volume of child sexual exploitation claims against major platforms and will require the court to rule on the outer boundaries of §230 immunity and First Amendment protection for content moderation in the context of minor-safety harms — an area where circuit courts have generally upheld immunity but public and legislative pressure to narrow it is intense. The court's resolution of whether algorithmic and editorial decisions by platforms constitute protected expression under *Moody*, and whether §230 bars claims framed as product liability or negligent design rather than publisher liability, could significantly shape the litigation landscape for platform child-safety suits nationwide.
View on CourtListener →