Topics: First Amendment · Other

Anthropic PBC v. U.S. Department of War

🏛 U.S. District Court for the Northern District of California · 📅 2026-03-09

Issue

Whether the U.S. Department of War's designation of Anthropic PBC as a "supply chain risk" under 10 U.S.C. § 3252 constitutes unlawful First Amendment retaliation against a private AI developer for maintaining contractual restrictions on its systems' use in domestic mass surveillance and autonomous lethal weapons applications.

What Happened

Anthropic moved for a temporary restraining order after the Pentagon formally designated it a "supply chain risk" in early March 2026, following the company's refusal to remove contractual "red lines" prohibiting use of its AI systems for domestic mass surveillance or fully autonomous lethal targeting. Employees of OpenAI and Google, filing in their personal capacities through the Protect Democracy Project's AI for Democracy Action Lab, submitted this amicus brief in support of Anthropic's TRO motion. The amici argue that the supply chain risk designation, a mechanism under 10 U.S.C. § 3252 historically reserved for foreign adversary-controlled vendors and compromised suppliers, was improperly weaponized as retaliatory punishment for protected speech and contractual safeguards. They cite *Hartman v. Moore*, 547 U.S. 250 (2006), for the proposition that the First Amendment prohibits government retaliation against individuals or entities for protected expression.

Why It Matters

This filing is an early test of whether federal national-security procurement authorities can be used to coerce AI developers into removing safety restrictions on military and surveillance applications. A ruling for Anthropic could establish limits on the government's ability to weaponize supply-chain exclusion powers against domestic technology companies that publicly advocate for AI guardrails.

Related Filings

Other proceedings in the same litigation tracked by this monitor.