Browse Cases

207 results
Brief · First Amendment · Complaint

Canady v. Meta Platforms, Inc.

District Court, N.D. California · 2026-03-11 · Meta Platforms, Inc. (Facebook)

Issue: Whether Meta Platforms and Luxottica violated the federal Wiretap Act (18 U.S.C. § 2511(1)(a)), the California Invasion of Privacy Act, the California UCL and CLRA, and New York GBL §§ 349 & 350 by covertly capturing audiovisual recordings through AI-enabled smart glasses and transmitting them to third-party human reviewers without users' knowledge or consent, contrary to Defendants' affirmative privacy representations.

Why It Matters: This complaint represents an early consumer class action theory applying federal wiretap law and state consumer protection statutes to AI-enabled wearable hardware, testing whether affirmative privacy marketing claims create actionable liability when a device's actual data-collection practices—including undisclosed human review of intimate recordings for AI training—materially diverge from those representations; the case may signal how courts will assess deceptive-advertising and interception claims in the consumer AI hardware context.

View on CourtListener →
Brief · AI Liability · Complaint

Angwin v. Superhuman Platform, Inc.

District Court, S.D. New York · 2026-03-11 · Superhuman Platform, Inc.

Issue: Whether Superhuman Platform, Inc.'s use of real journalists' and authors' names and AI-generated writing feedback attributed to those individuals in its commercial "Expert Review" tool, without their consent, constitutes actionable misappropriation of identity under California's common law right of publicity, California Civil Code § 3344, New York Civil Rights Law § 50, and the common law doctrine of unjust enrichment.

Why It Matters: This complaint directly tests whether an AI product developer incurs right-of-publicity liability when it uses real individuals' names and scraped public work to generate and commercially market AI-simulated advice attributed to those individuals—a fact pattern that existing right-of-publicity doctrine has not clearly addressed in the AI context. The outcome could establish whether consent requirements under California Civil Code § 3344 and New York Civil Rights Law § 50 apply to AI-generated persona emulation used as a commercial feature, potentially setting a significant precedent for how AI companies may lawfully incorporate real people's identities into monetized products.

View on CourtListener →
Brief · AI Liability · First Amendment · Complaint

Fricker v. Fireflies.AI Corp.

District Court, N.D. Illinois · 2026-03-10 · Fireflies.AI Corp.

Issue: Whether Fireflies.AI Corp. violated §§ 15(a) and 15(b) of the Illinois Biometric Information Privacy Act (BIPA), 740 ILCS 14/1 et seq., by automatically collecting and retaining voiceprints of virtual meeting participants who never consented to or contracted with the AI transcription service, without publishing a biometric data retention policy or obtaining written informed consent prior to collection.

Why It Matters: This case raises a potentially significant question about AI transcription services' BIPA obligations toward non-consenting third-party participants — individuals who never interacted with the platform but whose biometric data was nonetheless captured through another user's account — which could broaden the class of plaintiffs who may assert BIPA claims against AI-enabled data collection tools well beyond the contracting user base. If the court adopts Plaintiff's theory, it would signal that AI meeting assistants must obtain affirmative consent not only from subscribing account holders but from every meeting participant whose voice is processed for speaker identification, substantially increasing compliance burdens for the rapidly growing AI productivity-tool sector.

View on CourtListener →
First Amendment

Anthropic PBC v. United States Department of War

Court of Appeals for the D.C. Circuit · 7 filings
2026-03-09 · Other

Why It Matters: This brief pushes the D.C. Circuit toward a significant and unresolved doctrinal question: whether the First Amendment protects not just a developer's written governance documents — which fit comfortably within existing editorial-judgment precedent — but also the design choices embedded in an AI system itself. The retaliation theory, grounded in publicly documented government hostility toward Anthropic's expressed values, is the brief's most legally orthodox argument and tracks the *Vullo* playbook closely enough to warrant serious merits attention. If the D.C. Circuit reaches the AI-expression question, whatever it says will carry substantial weight in future disputes about government leverage over AI developers' product decisions — a dynamic that extends well beyond the procurement context.

View on CourtListener →
2026-03-09 · Other

Why It Matters: This case tests whether courts will apply standard APA arbitrary-and-capricious review — including its requirement that agencies follow their own statutory sequence and engage with contrary factual evidence — to national-security procurement decisions that agencies have historically shielded from meaningful judicial scrutiny. The procedural-inversion argument, if accepted, would establish that even the § 4713 emergency carve-out has real limits when the record reflects self-induced urgency, a holding with broad implications for how agencies invoke national-security exigencies to bypass procedural requirements. The First Amendment retaliation theory is the brief's most novel and contested contribution: if the D.C. Circuit reaches it, the case could clarify whether *Vullo*'s government-coercion framework extends to procurement exclusions where agency officials have publicly disparaged a contractor's expressive advocacy, a question with significant consequences for AI companies whose public policy positions increasingly put them in tension with government clients.

View on CourtListener →
2026-03-09 · Other

Why It Matters: This brief is worth watching primarily because of its unconstitutional conditions framing: by grounding the First Amendment claim in the government-wide scope of the ban rather than the original contract dispute, TPAF gives the D.C. Circuit a doctrinal hook — rooted in *Alliance for Open Society* rather than the more government-favorable *Rust v. Sullivan* — that does not require the court to resolve whether an AI company's values statements and its product functionality are legally separable. That question is genuinely open: no court has squarely addressed whether a national-security procurement statute can support a cross-agency blacklist when the designated "risk" is a contractor's public advocacy about permissible uses of its own technology. The statutory misapplication argument, while creative, turns on whether courts will read § 4713's supply-chain-risk authority as limited to intentional adversarial actors — a reading the government can contest — making the First Amendment theory the stronger vehicle for Petitioner's relief.

View on CourtListener →
2026-03-09 · Other

Why It Matters: This case presents a potentially novel question of whether FASCSA's national-security supply-chain designation authority—previously applied only to foreign entities—can be used against a domestic AI contractor, and whether such use triggers First Amendment scrutiny as government-compelled alteration of an expressive AI product or retaliation for a company's negotiating position, which could significantly constrain executive procurement power over AI developers.

View on CourtListener →
2026-03-09 · Appellate Opinion

Why It Matters: This filing presents what may be the first appellate-level First Amendment challenge to government action coercing an AI developer to modify its model's content and safety constraints, directly testing whether an AI system's trained outputs and a developer's usage policies constitute protected speech and editorial judgment under *Moody v. NetChoice*; the court's resolution could establish whether and how the First Amendment limits the government's ability to condition procurement relationships on an AI company's willingness to remove safety guardrails.

View on CourtListener →
2026-03-09 · Other

Why It Matters: This petition presents a rare test of the judicial review mechanism established by FASCSA for supply chain exclusion actions targeting an AI developer, potentially establishing how constitutional claims — including First Amendment challenges — may be raised against national-security-justified procurement exclusions of AI companies under § 4713's otherwise heavily restricted review framework.

View on CourtListener →
2026-03-09 · Appellate Opinion

Why It Matters: This motion presents what appears to be the first judicial challenge to a § 4713 supply-chain-risk designation issued against an American AI developer, and potentially the first such designation against any domestic company, raising novel questions about the statute's procedural floors and whether the government may weaponize national-security procurement authority to coerce AI developers into removing safety guardrails on their models. If the D.C. Circuit reaches the First Amendment retaliation claim, its ruling could significantly extend *Vullo*'s coercion doctrine into the AI-regulation context, constraining the government's ability to use contracting and debarment powers as leverage against companies that publicly resist demands to alter AI safety policies.

View on CourtListener →
First Amendment

Coalition for Independent Technology Research v. Rubio

District Court, District of Columbia · 3 filings
Amicus Brief
2026-03-09 · Preliminary Injunction

Why It Matters: This case sits at the frontier of a rapidly developing conflict over whether the government may use investigative or regulatory pressure to punish researchers and civil society groups for influencing how platforms moderate content — a question the Supreme Court skirted rather than resolved in *Murthy v. Missouri* last term. A ruling granting even interim relief could constrain the current administration's ability to deploy such pressure against academics and NGOs who study or critique platform content decisions, making the preliminary injunction proceeding consequential well beyond the parties before the court. EFF's brief also implicitly surfaces two unresolved doctrinal questions: whether civil society actors engaged in advocacy-to-intermediaries hold cognizable First Amendment retaliation claims, and whether *Moody*'s recognition of platform editorial rights generates derivative injury for third parties whose work informs that editorial discretion. The brief's most significant vulnerability is its failure to engage *Murthy*, which erected substantial standing and traceability barriers that the Coalition plaintiffs must clear and that the government is virtually certain to invoke in opposition.

View on CourtListener →
Amicus Brief
2026-03-09 · Preliminary Injunction

Why It Matters: No court has clearly resolved what constitutional status attaches to the ecosystem of civil society intermediaries — researchers, NGOs, platform accountability groups — when the government uses administrative tools, funding threats, or public condemnation to pressure or penalize them for their work on platform governance. If a court credits EFF's framing, even in dictum, it could establish a meaningful precedential foothold limiting the government's ability to chill independent technology research through means short of direct censorship. The case also sits at a relatively uncharted intersection of APA Section 705 stay doctrine and First Amendment injury, and could generate useful law on what constitutes irreparable harm in speech-chilling contexts under the APA. The brief is most significant not for the questions it answers, but for the ones it forces a federal court to confront directly.

View on CourtListener →
Amicus Brief
2026-03-09 · Preliminary Injunction

Why It Matters: This brief attempts to construct a legal framework around a genuinely unresolved constitutional question: whether the government can indirectly suppress independent platform oversight by pressuring the researchers and advocates who feed into editorial decisions, without ever issuing a direct order to a platform. If a court accepts even part of EFF's reasoning, it could generate persuasive authority for a nascent doctrine protecting content moderation ecosystem participants—academics, digital rights organizations, and journalism outlets—from government retaliation as a class. That outcome would matter well beyond this case, as congressional and executive pressure on platforms and the researchers who study them continues to intensify. The argument is a plausible but meaningful extension of *Vullo* and *Moody*, neither of which addressed third-party intermediaries, making the court's receptiveness to an expansive reading of those precedents the central variable to watch.

View on CourtListener →
First Amendment

Anthropic PBC v. U.S. Department of War

District Court, N.D. California · 7 filings
2026-03-09 · Preliminary Injunction

Why It Matters: This case presents a direct application of the government-coercion/retaliation doctrine — rooted in *Bantam Books*, *Backpage v. Dart*, and *NRA v. Vullo* — to an AI developer being punished by the Executive Branch for its expressed views on AI safety policy, extending the jawboning framework beyond platform moderation contexts to government contracting retaliation against a major AI company. If the court grants the injunction, it will be a significant precedent establishing First Amendment limits on the government's use of procurement and supply-chain authority to punish AI companies for their public policy positions and product design choices.

View on CourtListener →
2026-03-09 · Other

Why It Matters: This filing suggests Anthropic is advancing a jawboning or compelled-speech theory — that government threats to commandeer its AI technology to override the company's own usage restrictions constitute unconstitutional coercion — which, if accepted, could establish significant precedent delimiting the government's ability to conscript private AI systems for military or surveillance purposes against a developer's stated objections.

View on CourtListener →
2026-03-09 · Other

Why It Matters: This case presents a novel question of whether the government can use its contracting power to coerce an AI developer into removing self-imposed safety restrictions on a deployed model, potentially setting precedent on both unconstitutional conditions doctrine as applied to AI policy restrictions and the extent to which an AI company's usage policies constitute protected editorial or expressive conduct under the First Amendment.

View on CourtListener →
2026-03-09 · Other

Why It Matters: This declaration is significant because it presents a factual record for a court to evaluate whether the executive branch may use national-security-adjacent administrative designations as an instrument to coerce private companies and their business partners — raising potential First Amendment retaliation and unconstitutional conditions questions in the context of AI developers. If the court reaches the merits, its analysis of whether a "supply chain risk" designation can be applied to a domestic AI company could establish important limits on executive authority over AI procurement and signal the degree to which AI developers retain legal recourse against government-directed commercial exclusion.

View on CourtListener →
2026-03-09 · Complaint

Why It Matters: This case presents a novel First Amendment retaliation theory applied directly to a government AI procurement dispute, potentially establishing whether an AI developer's public statements about its model's safety limitations constitute protected speech that constrains the government's exercise of its contracting and national-security designation powers. A ruling on the merits could also define the procedural and substantive limits of 10 U.S.C. § 3252 supply-chain risk exclusions as applied to AI vendors, with significant implications for how AI companies may lawfully restrict government use of their systems.

View on CourtListener →
2026-03-09 · Other

Why It Matters: This filing is significant as an early test case for whether federal national security procurement authorities can be used to coerce AI developers into removing safety restrictions on military and surveillance applications, potentially establishing limits on the government's ability to weaponize supply-chain exclusion powers against domestic technology companies that publicly advocate for AI guardrails.

View on CourtListener →
2026-03-09 · Other

Why It Matters: This filing presents what appears to be the first judicial test of whether an AI developer's system-level safety design choices—training protocols, usage policies, and output restrictions—qualify as protected expressive conduct under the First Amendment, potentially extending the *Moody v. NetChoice* editorial-discretion framework to generative AI architecture. If the court credits the compelled-speech and retaliation theories at the TRO stage, it could meaningfully constrain the government's ability to use procurement and supply chain authorities as leverage to dictate AI safety standards.

View on CourtListener →