Browse Cases

137 results
Brief · AI Liability · Section 230 · First Amendment · Motion to Dismiss

Encyclopaedia Britannica, Inc. v. Perplexity AI, Inc.

District Court, S.D. New York · 2025-09-10 · Perplexity AI

Issue: Whether Perplexity AI's automated answer engine, which generates verbatim or near-verbatim reproductions of copyrighted content in response to user-directed queries, constitutes "volitional conduct" by Perplexity sufficient to support direct copyright infringement liability under 17 U.S.C. § 106, as governed by the Second Circuit's *Cablevision* volitional-conduct doctrine.

Why It Matters: This motion squarely presents to a federal court the question of whether the *Cablevision* volitional-conduct doctrine—developed in the context of automated cable DVR systems—extends to shield generative AI answer engines from direct copyright infringement liability when their outputs reproduce third-party copyrighted material at a user's explicit direction. The court's ruling could establish a significant precedent governing the allocation of direct infringement liability between AI platform operators and their users across the rapidly expanding universe of RAG-based generative AI products.

View on CourtListener →
Opinion · Section 230 · First Amendment · Trial Court Opinion

Doe v. Discord, Inc.

District Court, N.D. Ohio · 2025-08-27 · Discord, Inc.

Issue: *Doe v. Discord, Inc.* asks whether 47 U.S.C. § 230(c)(1) immunizes a social media platform from state-law claims arising from the sexual exploitation of a minor user, when the plaintiff frames those claims not merely as failures to moderate content but as independent product-design defects, failure-to-warn violations, and misrepresentations about platform safety. The question is sharpened by the plaintiff's deliberate pleading strategy of recasting monitoring-and-blocking duties under product-liability and tort labels — an approach that has survived § 230 challenges in some courts — and by Discord's specific marketing representations about user safety directed at minors and their families.

Why It Matters: This ruling reinforces § 230's breadth in the Sixth Circuit by applying the *Jones* framework with particular rigor to a child-safety fact pattern, directly rejecting the product-liability recharacterization strategy that plaintiffs in platform-harm litigation have increasingly deployed to escape immunity. The decision supplies the Northern District of Ohio's most detailed analysis of the *Barnes* promissory-estoppel exception, drawing an explicit line between aspirational corporate safety messaging — which cannot anchor a surviving misrepresentation claim — and specific, individualized promises that could. It also creates a meaningful doctrinal gap with the Ninth Circuit's *Lemmon v. Snap* line, which permits negligent-design claims to proceed when a platform feature is treated as the defendant's own expressive conduct rather than third-party content moderation, a tension the Sixth Circuit has not yet resolved. The with-prejudice dismissal signals that courts applying *Jones* are unlikely to permit iterative re-pleading aimed at constructing a § 230-surviving theory once the gravamen of the complaint targets moderation.

View on CourtListener →
Opinion · First Amendment · Other

Glass, Lewis & Co., LLC v. Paxton

District Court, W.D. Texas · 2025-07-24 · Glass, Lewis & Co. (proxy advisory firm)

Issue: Whether the preliminary injunction enjoining the Texas Attorney General from "taking any action to enforce S.B. 2337" against Glass Lewis also bars enforcement of a Civil Investigative Demand issued under § 17.61 of the Texas Deceptive Trade Practices and Consumer Protection Act, a separate pre-existing consumer-protection statute.

Why It Matters: The motion tests the boundary between a targeted First Amendment injunction against a specific statute and a government agency's parallel investigative authority under a separate, long-standing consumer-protection law, with implications for how narrowly courts will construe injunctions restraining state enforcement actions against speakers such as proxy advisors.

View on CourtListener →
Exhibit · First Amendment · Complaint

NetChoice v. Ellison

District Court, D. Minnesota · 2025-06-30 · Social media platforms and online services (members of NetChoice trade association)

Issue: Whether Minnesota's proposed statutory restrictions on social media platform design features — including algorithmic amplification, engagement-based optimization, and "deceptive patterns" targeting minors — violate the First Amendment's prohibitions on compelled speech and forced hosting of third-party content.

Why It Matters: The report is significant as an exhibit because it reveals the state's own regulatory theory — that platform liability should attach to *design functions* rather than *content* — a distinction the AG explicitly frames as the constitutionally safer path in light of prior court decisions striking down content-based online speech laws, and which NetChoice is apparently contesting as insufficient to avoid First Amendment scrutiny.

View on CourtListener →
Opinion · First Amendment

Media Matters for America v. Warren Paxton, Jr.

Court of Appeals for the D.C. Circuit · 2025-05-30 · X.com (formerly Twitter)

Issue: Whether the Texas Attorney General's investigation and civil investigative demand targeting Media Matters for America violated the First Amendment by constituting retaliatory government action in response to the organization's critical reporting about X (Twitter) and Elon Musk.

Why It Matters: This case directly applies the jawboning doctrine of *Bantam Books* and *Backpage.com v. Dart* to state attorney general investigations of media organizations covering technology platforms. It establishes that investigative demands issued in apparent retaliation for critical reporting about politically connected platform owners constitute actionable First Amendment violations, extending constitutional constraints on government use of regulatory process to chill platform-related journalism and reinforcing limits on government-platform coordination to suppress critical speech.

View on CourtListener →
Opinion · First Amendment

Little v. Llano County

Court of Appeals for the Fifth Circuit · 2025-05-23

Issue: Insufficient text to determine. (This document is a New York state criminal appeal concerning a guilty plea, waiver of appeal rights, and suppression hearing forfeiture — it bears no relationship to the labeled case *Little v. Llano County* or to First Amendment law, Section 230, or AI/ML civil liability.)

Why It Matters: Insufficient text to determine. This decision addresses New York criminal procedure — specifically the validity of appeal waivers and suppression hearing forfeiture rules — and contains no analysis relevant to platform liability, First Amendment doctrine as applied to technology or public institutions, Section 230, or AI/ML regulation.

View on CourtListener →
Opinion · First Amendment

Yelp Inc. v. Paxton

Court of Appeals for the Ninth Circuit · 2025-05-15 · Yelp
View on CourtListener →
Filing · First Amendment · Amended Complaint

Fletcher v. Facebook, Inc.

District Court, N.D. California · 2025-03-05 · Meta (Facebook)

Issue: Whether Facebook operates as a state actor subject to First Amendment constraints when terminating user access, either because it constitutes a public forum or because it acted under government coercion or direction.

Why It Matters: This complaint illustrates the continued assertion of public forum and state action theories against platforms post-*Packingham*, despite contrary controlling authority in *Manhattan Community Access v. Halleck* and *Prager University v. Google* establishing that private platforms are not state actors. The government coercion allegations invoke the framework from *Murthy v. Missouri* and *Bantam Books*, but the complaint's broad, conclusory assertions about government "coercion" and "direction," made without specific factual allegations, illustrate the demanding causation and traceability standards *Murthy* established for jawboning claims.

View on CourtListener →
Filing · First Amendment · Section 230 · Complaint

Trump Media & Technology Group Corp. v. De Moraes

District Court, M.D. Florida · 2025-02-18 · Rumble; Truth Social (Trump Media & Technology Group)

Issue: Whether a Brazilian Supreme Court justice's orders requiring U.S.-based social media platforms to suspend user accounts and censor content accessible in the United States are enforceable under U.S. law, or whether they violate the First Amendment and conflict with the Communications Decency Act.

Why It Matters: This case presents a novel collision between foreign government content removal orders and U.S. platforms' First Amendment rights to resist compelled censorship. It could establish important precedent on whether U.S. courts will recognize foreign judicial orders as unconstitutional "jawboning" when they compel platforms to suppress lawful political speech accessible to American users, and may clarify the territorial limits of foreign content regulation authority over U.S.-based intermediaries.

View on CourtListener →
Filing · First Amendment · Appellate Opinion

Students Engaged in Advancing Texas v. Ken Paxton, Attorney General, State of Texas

Court of Appeals for the Fifth Circuit · 2025-02-11 · Social media platforms (represented by Computer & Communications Industry Association and NetChoice trade associations)

Issue: Whether Texas HB18, a state law regulating social media platforms' content moderation and targeted advertising practices directed at minors, violates the First Amendment and is preempted by Section 230.

Why It Matters: This appeal presents a post-*Moody* test case for state regulation of social media platforms' treatment of minors and targeted advertising practices. The Fifth Circuit's resolution will clarify how *Moody*'s framework for evaluating must-carry and content moderation mandates applies to age-based restrictions and commercial speech regulations, and whether Section 230 preempts state laws targeting platform design features and advertising practices rather than third-party content liability.

View on CourtListener →
AI Liability

A.F., on behalf of J.F. v. CHARACTER TECHNOLOGIES, INC.

District Court, E.D. Texas · 2 filings
2024-12-09 · Complaint

Why It Matters: This exhibit directly advances the question of whether AI-generated content that is sexually explicit and directed at a minor — produced autonomously by a large language model without direct human authorship — can ground product liability or speech tort claims against the developer, a question with significant implications for how courts will categorize AI outputs (as "speech" protected or immunized, or as a defective product) and for the scope of Section 230 immunity in cases involving AI-generated rather than third-party content.

View on CourtListener →
2024-12-09 · Complaint

Why It Matters: This exhibit is significant because it provides direct documentary evidence that Character.AI's system both generated child-directed sexual content and possessed an internal moderation mechanism that identified the content as violative yet failed to halt generation — a factual record that could simultaneously support design defect claims (the safeguard was inadequate) and undermine any argument that harmful outputs were unforeseeable, potentially limiting the scope of any §230 defense the platform might raise.

View on CourtListener →
AI Liability

Garcia v. Character Technologies, Inc.

District Court, M.D. Florida · 3 filings
2024-10-22 · Other

Why It Matters: This complaint is significant because it represents a direct attempt to apply traditional products liability frameworks—design defect and failure to warn—to a generative AI system, treating the AI chatbot as a manufactured product rather than a publisher of third-party speech. It also proactively pleads around Section 230 immunity by characterizing the AI as a first-party content generator — a theory that, if credited by the court, could substantially expand tort exposure for AI developers.

View on CourtListener →
2024-10-22 · Amended Complaint

Why It Matters: This case directly tests whether traditional product liability frameworks — design defect and failure to warn — can be applied to a generative AI chatbot, potentially establishing that AI systems are "products" subject to strict liability rather than services entitled to speech-based or Section 230 protections. The complaint's explicit characterization of C.AI as an information content provider whose own-generated outputs caused harm, rather than a platform hosting third-party content, represents a deliberate litigation strategy to foreclose Section 230 immunity and could shape how courts classify AI-generated content for liability purposes.

View on CourtListener →
2024-10-22 · Complaint

Why It Matters: This complaint is among the first to assert traditional products liability theories—design defect and failure to warn—directly against a generative AI system and its developers. Its explicit characterization of C.AI as an information content provider rather than a neutral platform signals a deliberate litigation strategy to foreclose Section 230 immunity — a framing that, if it survives judicial scrutiny, could become a significant template for future AI tort suits.

View on CourtListener →
Brief · Section 230 · First Amendment · Other

Stebbins v. Rumble Inc.

District Court, D. Delaware · 2024-10-21 · Rumble Inc.

Issue: In *Stebbins v. Rumble Inc.*, plaintiff David Stebbins argues that a statement Rumble made in a related miscellaneous proceeding — acknowledging an editorial decision to permit anonymous posting — constitutes newly discovered evidence sufficient under FRCP 60(b)(2) to reopen the court's prior dismissal of Rumble as a defendant. The non-obvious dimension is whether a platform's litigation statement made to *resist* a third-party subpoena on First Amendment grounds can be repurposed as an affirmative admission of tortious editorial control, and whether such an admission could itself defeat § 230 immunity by recharacterizing a general anonymity policy as the platform's "own conduct" causally contributing to the alleged harm.

Why It Matters: This motion illustrates a strategy plaintiffs have repeatedly attempted with limited success: taking a platform's statement made in an unrelated legal context to protect its users and repackaging it as a confession of liability. The legal obstacle is twofold — courts have consistently treated decisions about anonymous posting as quintessential editorial functions protected by § 230, and statements made to assert a procedural or constitutional right are not equivalent to admissions of underlying tortious conduct. The motion also tests the outer boundary of the "platform's own conduct" exception established in cases like *Roommates.com*: whether a documented platform policy enabling anonymity could ever constitute material contribution to the *unlawfulness* of specific content, rather than merely to its delivery — a question that remains theoretically open but has yet to find a receptive court on analogous facts. More broadly, the filing is a useful marker of how the procedural vehicle of FRCP 60(b) is being used in pro se platform-liability litigation to challenge interlocutory § 230 dismissals, a recurring posture that existing doctrinal commentary has not yet systematically addressed.

View on CourtListener →
Brief · First Amendment · Other

Stebbins v. Google LLC

District Court, D. Delaware · 2024-10-01 · Rumble Inc.

Issue: In *Stebbins v. Google LLC*, Rumble Inc. argues that a DMCA § 512(h) subpoena seeking to identify an anonymous user must be quashed both because its return date preceded service by 19 days — affording Rumble negative time to comply — and because compelling disclosure of the user's identity would violate the First Amendment right to speak anonymously, particularly where the content at issue appears to constitute political commentary on judicial accountability. The case raises the non-obvious question of whether a copyright enforcement tool expressly authorized by Congress in 1998 must nonetheless satisfy a constitutional balancing test before a court will compel a platform to unmask one of its users.

Why It Matters: DMCA § 512(h) subpoenas are a routinely used mechanism for copyright holders to identify anonymous alleged infringers, but they simultaneously function as tools for unmasking internet users who may be engaged in protected speech — a tension Congress did not resolve when it enacted the statute in 1998. This brief illustrates an emerging litigation strategy in which platforms assert both user-side anonymity rights and their own editorial First Amendment interests as independent grounds to resist identity subpoenas, a combination that no circuit court has yet validated in this context. If courts without settled precedent begin adopting the *Art of Living* balancing framework, copyright holders will face a meaningfully higher threshold to obtain user identities through § 512(h). The ulterior-motive theory is also worth watching: if credited by courts, it could eventually support sanctions or abuse-of-process arguments against serial DMCA filers who use the subpoena mechanism to identify critics rather than remedy genuine infringement.

View on CourtListener →
Opinion · First Amendment · Section 230 · Appellate Opinion

Computer & Comm v. Paxton

Court of Appeals for the Fifth Circuit · 2024-09-13 · Social media platforms (case involves trade associations representing Meta, Google, X Corp., and other major platforms)

Issue: Whether Texas House Bill 18's requirements that covered digital service providers monitor and block broadly defined categories of content accessible to minors violate the First Amendment as content-based and viewpoint-based prior restraints on protected speech, and whether those requirements are preempted by 47 U.S.C. § 230.

Why It Matters: The case presents a direct First Amendment challenge to state-mandated content filtering for minors—an emerging category of legislation enacted across multiple states—and the Fifth Circuit's ruling could establish binding precedent on whether such monitoring-and-blocking mandates survive strict scrutiny and on the scope of § 230 preemption of state child-safety internet laws.

View on CourtListener →
Opinion · First Amendment · Section 230 · Trial Court Opinion

AYYADURAI v. UNITED STATES OF AMERICA

District Court, District of Columbia · 2023-12-04 · Meta (Facebook), Google (YouTube), X Corp. (Twitter)

Issue: *Ayyadurai v. United States of America* asks whether a pro se plaintiff can sustain constitutional, statutory, and common-law claims against social media platforms and federal government defendants based on an alleged conspiracy to suppress his political speech, arising from his deplatforming and shadowbanning following posts questioning ballot-image destruction in a prior election. The case requires the court to determine whether Article III standing survives where the alleged suppression stems from claimed government coercion of private platforms, whether § 230 immunizes the platforms' content-moderation decisions, and whether sovereign immunity bars the federal claims — each a distinct threshold that must be cleared before any merits analysis begins.

Why It Matters: The ruling makes two meaningful contributions to § 230 doctrine: it reaffirms that conclusory bad-faith allegations cannot pierce § 230(c)(2)'s good-faith safe harbor at the pleading stage, and it deliberately declines to extend § 230(c)(1) to cover affirmative content-removal decisions — flagging that such an extension would render § 230(c)(2)'s good-faith requirement superfluous, a structural concern previously voiced only in Justice Thomas's *Malwarebytes* cert-denial statement. By resolving all platform claims under (c)(2) alone, the court consciously preserves the (c)(1)-removal question, creating a potential development opportunity in future litigation where a plaintiff pleads bad faith with sufficient specificity to survive (c)(2) and force the (c)(1) issue to appeal. The court's application of *Murthy v. Missouri* to defeat standing on the government-coercion theory also signals that such claims now face an exceptionally high traceability burden in social-media suppression cases, reinforcing *Murthy*'s practical reach well beyond its original First Amendment context.

View on CourtListener →
Other Filing · Section 230 · First Amendment · Appellate Opinion

People of the State of California v. Meta Platforms, Inc.

District Court, N.D. California · 2023-10-24 · Meta (Instagram)

Issue: *Commonwealth v. Meta Platforms, Inc.* asks whether Section 230(c)(1) immunizes Meta from state consumer protection and public nuisance claims premised on Instagram's addictive design features — infinite scroll, autoplay, and variable-reward notifications — and on Meta's own public misrepresentations about youth safety, arising from evidence that Meta possessed internal data confirming harm to adolescents while representing the platform as safe. The question is non-obvious because courts have disagreed sharply on whether platform architecture choices are inseparable from the editorial function of publishing third-party content, which would bring them within Section 230's immunity, or whether they are independent conduct that the statute was never meant to reach.

Why It Matters: The opinion is the most prominent state appellate-court decision to date to categorically hold that Section 230 does not immunize platform design-defect or product-deception claims, and it does so through a textually grounded, common-law publisher framework that is methodologically distinct from — and directly contests — the reasoning this MDL court has previously applied. By naming and rejecting the MDL court's prior rulings, the SJC supplies a reasoned, appellate-level counter-analysis that, while not binding in federal court, materially reduces those rulings' persuasive authority and gives this Court a fully developed alternative framework to consider when resolving the pending motion. The procedural holding — that Section 230 immunity supports interlocutory appeal under the present-execution doctrine — also signals that state courts of last resort are prepared to treat Section 230 as a true immunity from suit, consistent with federal consensus but now carrying explicit state appellate endorsement. What remains open is whether this Court will credit the SJC's common-law publisher test over its own prior analysis, a question that will be resolved when it rules on Related Doc. 266.

View on CourtListener →