Anthropic PBC v. U.S. Department of War
Reply Brief — Attachment 113
Issue: Whether the Executive Branch violated the First Amendment by retaliating against Anthropic — through an unprecedented supply-chain-risk designation and government-wide blacklisting — because of Anthropic's public advocacy for safe and responsible AI use and its refusal to remove contractual restrictions on use of its AI model for lethal autonomous warfare and mass surveillance.
Anthropic filed suit alleging that after it declined to remove contract terms restricting use of its frontier AI model for lethal autonomous warfare and mass surveillance of Americans, the President issued a directive targeting the company for its purportedly "Radical Left" and "WOKE" views. The Secretary of War then designated Anthropic as a supply-chain risk under 10 U.S.C. § 3252 — the first such designation ever applied to an American company — and ordered defense contractors to cease doing business with Anthropic immediately. In this reply brief in support of its motion for a preliminary injunction, Anthropic argues it is likely to prevail on three grounds: (1) the government's actions constituted First Amendment retaliation targeting protected speech (Anthropic's public advocacy, CEO statements, legislative testimony, and contract negotiating positions); (2) the Secretary's designation and secondary-boycott directive violated the APA as arbitrary, procedurally flawed, and in excess of statutory authority; and (3) the Presidential Directive violated due process and separation of powers by blacklisting Anthropic without statutory or constitutional authority. Anthropic invokes *NRA v. Vullo* and the three-part retaliation framework, arguing that the government's own documents confirm the adverse actions were expressly motivated by Anthropic's "rhetoric" and "ideology."
This case presents a direct application of the government-coercion/retaliation doctrine — rooted in *Bantam Books*, *Backpage.com v. Dart*, and *NRA v. Vullo* — to an AI developer being punished by the Executive Branch for its expressed views on AI safety policy, extending the jawboning framework beyond the platform-moderation context to government-contracting retaliation against a major AI company. A grant of the injunction would set significant precedent establishing First Amendment limits on the government's use of procurement and supply-chain authority to punish AI companies for their public policy positions and product design choices.
MOTION for Temporary Restraining Order — Attachment 29
Issue: Whether the U.S. Department of War's threatened use of government powers to compel Anthropic to continue providing Claude technology for military operations — despite Anthropic's desire to cease that relationship — constitutes unconstitutional government coercion in violation of the First Amendment.
Anthropic filed suit in the Northern District of California (Case No. 3:26-cv-01996-RFL), and the document at issue is Exhibit 23, a March 4, 2026 Washington Post news article submitted as evidentiary support. The article reports that the Trump administration banned government agencies from using Anthropic's tools while simultaneously continuing to use Claude, embedded in the Pentagon's Maven Smart System, for active targeting operations in Iran, and that administration officials stated they would invoke government powers to retain the technology against Anthropic's wishes if CEO Dario Amodei attempted to direct the military to cease use. The article further describes the underlying dispute as stemming from disagreements over Anthropic's terms governing use of Claude in mass domestic surveillance and fully autonomous weapons.
This filing suggests Anthropic is advancing a jawboning or compelled-speech theory — that government threats to commandeer its AI technology and override the company's own usage restrictions constitute unconstitutional coercion — which, if accepted, could establish significant precedent limiting the government's ability to conscript private AI systems for military or surveillance purposes over a developer's stated objections.
MOTION for Temporary Restraining Order
Issue: Whether the U.S. Department of War may compel Anthropic PBC to strip its AI model Claude of usage-policy restrictions—specifically prohibitions on mass surveillance of Americans and lethal autonomous warfare—as a condition of continued government contracting, implicating the First Amendment and compelled-speech doctrine as applied to an AI developer's editorial control over its model's permitted uses.
Anthropic filed suit against the U.S. Department of War in the Northern District of California, and this document is a supporting declaration by Anthropic co-founder and Chief Science Officer Jared Kaplan filed in connection with what appears to be a motion for preliminary relief. Kaplan attests that the Department demanded that Anthropic remove its Usage Policy across all existing and future offerings—thereby permitting "all lawful uses" by DoW and its contractors—and delivered an ultimatum that refusal would result in the loss of all current and future Department business. Anthropic agreed to shift from a "whitelist" to a "blacklist" approach but refused to eliminate two specific prohibitions: mass surveillance of Americans and lethal autonomous warfare, which Kaplan describes as safety-critical limits grounded in Anthropic's technical judgment about Claude's current capabilities and in the inadequacy of existing legal frameworks to address AI-enabled surveillance at scale.
This case presents a novel question of whether the government can use its contracting power to coerce an AI developer into removing self-imposed safety restrictions on a deployed model, potentially setting precedent on both unconstitutional conditions doctrine as applied to AI policy restrictions and the extent to which an AI company's usage policies constitute protected editorial or expressive conduct under the First Amendment.
MOTION for Temporary Restraining Order — Attachment 4
Issue: Whether the federal government's designation of Anthropic PBC as a "supply chain risk" — effectuated through a presidential social media directive, Secretary of War Pete Hegseth's formal letters, and the GSA's removal of Anthropic from USAi.gov — constitutes an unlawful blacklisting that violates Anthropic's constitutional and statutory rights, causing cognizable injury to the company's commercial relationships.
Anthropic PBC filed suit against the U.S. Department of War in the Northern District of California, and this document is a supporting declaration by Anthropic's Chief Commercial Officer, Paul Smith, filed March 9, 2026, in connection with what appears to be a motion for preliminary injunctive relief. Smith attests that on February 27, 2026, President Trump directed all agencies to cease use of Anthropic's AI models, and Secretary Hegseth subsequently designated Anthropic a "supply chain risk" via public posts and formal letters dated March 3, 2026, resulting in Anthropic's removal from federal procurement platforms and cascading terminations by government contractors. The declaration catalogs specific, quantified commercial harms — including lost or imperiled contracts collectively worth hundreds of millions of dollars across financial services, healthcare, education, and other industries — and argues that the statutory "supply chain risk" framework applies only to foreign adversaries posing national security threats, not to domestic U.S. companies.
This declaration is significant because it presents a factual record for a court to evaluate whether the executive branch may use national-security-adjacent administrative designations as an instrument to coerce private companies and their business partners — raising potential First Amendment retaliation and unconstitutional conditions questions in the context of AI developers. If the court reaches the merits, its analysis of whether a "supply chain risk" designation can be applied to a domestic AI company could establish important limits on executive authority over AI procurement and signal the degree to which AI developers retain legal recourse against government-directed commercial exclusion.
Complaint
Issue: Whether the federal government's retaliatory termination of contracts, designation of Anthropic as a "Supply-Chain Risk to National Security" under 10 U.S.C. § 3252, and Presidential Directive ordering all agencies to cease use of Anthropic's technology violated the First Amendment's prohibition on government retaliation against protected speech, the Fifth Amendment's Due Process Clause, the APA, and separation-of-powers limits on executive authority.
Anthropic filed a complaint for declaratory and injunctive relief in the Northern District of California on March 9, 2026, after the President issued a social-media directive ordering every federal agency to cease use of Anthropic's technology immediately, and the Secretary of War subsequently designated Anthropic a "Supply-Chain Risk to National Security" and barred all military contractors from conducting commercial activity with the company. Anthropic alleges these Challenged Actions were triggered solely by its public refusal to remove usage restrictions prohibiting Claude's deployment for lethal autonomous warfare and mass surveillance of Americans—restrictions the Department of War had previously accepted. Anthropic argues the Secretarial Order and Letter violate 10 U.S.C. § 3252's plain text and required procedures, constitute APA-prohibited arbitrary and capricious agency action, effect unconstitutional First Amendment retaliation under *National Rifle Ass'n of America v. Vullo*, 602 U.S. 175 (2024), deprive Anthropic of property and liberty interests without due process, and exceed any congressionally delegated executive authority. The complaint seeks declarations of unlawfulness and injunctive relief halting implementation of all Challenged Actions.
This case presents a novel First Amendment retaliation theory applied directly to a government AI procurement dispute, potentially establishing whether an AI developer's public statements about its model's safety limitations constitute protected speech that constrains the government's exercise of its contracting and national-security designation powers. A ruling on the merits could also define the procedural and substantive limits of 10 U.S.C. § 3252 supply-chain risk exclusions as applied to AI vendors, with significant implications for how AI companies may lawfully restrict government use of their systems.
Amicus: Employees of OpenAI and Google in Their Personal Capacities
Issue: Whether the U.S. Department of War's designation of Anthropic PBC as a "supply chain risk" under 10 U.S.C. § 3252 constitutes unlawful First Amendment retaliation against a private AI developer for maintaining contractual restrictions on its systems' use in domestic mass surveillance and autonomous lethal weapons applications.
Anthropic moved for a temporary restraining order after the Pentagon formally designated it a "supply chain risk" in early March 2026, following the company's refusal to remove contractual "red lines" prohibiting use of its AI systems for domestic mass surveillance or fully autonomous lethal targeting. Employees of OpenAI and Google, filing in their personal capacities through the Protect Democracy Project's AI for Democracy Action Lab, submitted this amicus brief in support of Anthropic's TRO motion. The amici argue that the supply chain risk designation — a mechanism historically reserved for foreign adversary-controlled vendors and compromised suppliers under 10 U.S.C. § 3252 — was improperly weaponized as retaliatory punishment for protected speech and contractual safeguards, citing *Hartman v. Moore*, 547 U.S. 250 (2006), for the proposition that the First Amendment prohibits government retaliation against individuals or entities for speaking out.
This filing is significant as an early test case for whether federal national security procurement authorities can be used to coerce AI developers into removing safety restrictions on military and surveillance applications, potentially establishing limits on the government's ability to turn supply-chain exclusion powers against domestic technology companies that publicly advocate for AI guardrails.
Amicus: Foundation for Individual Rights and Expression
Issue: Whether the Pentagon's designation of Anthropic as a "supply chain risk" under 10 U.S.C. § 3252—imposed because Anthropic refused to remove safety guardrails from its Claude AI systems to enable fully autonomous weapons development and mass domestic surveillance—constitutes unconstitutional compelled speech, viewpoint-based retaliation, and coercion in violation of the First Amendment.
Anthropic filed suit against the U.S. Department of War in the Northern District of California and moved for a temporary restraining order, preliminary injunction, or stay under APA § 705. Five civil liberties and technology-policy organizations (FIRE, EFF, the Cato Institute, Chamber of Progress, and FALA) filed this amicus brief in support of that motion. Amici argue two independent First Amendment violations: first, that Anthropic's editorial choices embedded in Claude's design and usage policies constitute protected expressive conduct, such that the government's demand to alter those choices amounts to compelled speech; and second, that the supply chain risk designation is facially retaliatory, as senior Pentagon officials publicly stated the sanction was intended to punish Anthropic's "ideology" and make room for more "patriotic" contractors. Amici further argue that permitting government-directed AI deployment for domestic surveillance raises distinct First Amendment concerns, including chilling effects on the broader public.
This filing presents what appears to be the first judicial test of whether an AI developer's system-level safety design choices—training protocols, usage policies, and output restrictions—qualify as protected expressive conduct under the First Amendment, potentially extending the *Moody v. NetChoice* editorial-discretion framework to generative AI architecture. If the court credits the compelled-speech and retaliation theories at the TRO stage, it could meaningfully constrain the government's ability to use procurement and supply chain authorities as leverage to dictate AI safety standards.