Anthropic PBC v. U.S. Department of War
Issue
Whether the Pentagon's designation of Anthropic as a "supply chain risk" under 10 U.S.C. § 3252—imposed because Anthropic refused to remove safety guardrails from its Claude AI systems to enable fully autonomous weapons development and mass domestic surveillance—constitutes unconstitutional compelled speech, viewpoint-based retaliation, and coercion in violation of the First Amendment.
What Happened
Anthropic filed suit against the U.S. Department of War in the Northern District of California and moved for a temporary restraining order, preliminary injunction, or stay under APA § 705. Five civil liberties and technology-policy organizations (FIRE, EFF, Cato Institute, Chamber of Progress, and FALA) filed this amicus brief in support of that motion. Amici advance two independent First Amendment theories: first, that Anthropic's editorial choices embedded in Claude's design and usage policies constitute protected expressive conduct, such that the government's demand to alter Claude's outputs amounts to compelled speech; and second, that the supply chain risk designation is facially retaliatory, as senior Pentagon officials publicly stated the sanction was intended to punish Anthropic's "ideology" and make room for more "patriotic" contractors. Amici further argue that government-directed deployment of AI for domestic surveillance raises independent First Amendment concerns, including chilling effects on the broader public.
Why It Matters
This filing presents what appears to be the first judicial test of whether an AI developer's system-level safety design choices—training protocols, usage policies, and output restrictions—qualify as protected expressive conduct under the First Amendment, potentially extending the *Moody v. NetChoice* editorial-discretion framework to generative AI architecture. If the court credits the compelled-speech and retaliation theories at the TRO stage, it could meaningfully constrain the government's ability to use procurement and supply chain authorities as leverage to dictate AI safety standards.