First Amendment

Anthropic PBC v. United States Department of War

🏛 Court of Appeals for the D.C. Circuit · 7 filings
2026-03-09 Other First Amendment AI Liability

Amicus: Foundation for Individual Rights and Expression

Issue: In *Anthropic PBC v. United States Department of War*, five civil liberties and technology amici argue that the Pentagon's supply chain risk designation of Anthropic PBC violated the First Amendment on three distinct theories: that the designation compelled Anthropic to alter Claude's design and usage policies (compelled speech), silenced Anthropic's published restrictions on surveillance and autonomous weapons use (compelled silence), and retaliated against Anthropic for protected public criticism of Pentagon demands. The case raises the genuinely unsettled question of whether an AI developer's training choices, published governance documents, and system-level usage policies constitute protected expression — and whether a national security procurement authority can be wielded against a company, at least in part, because of the ideological valence of its product.

Filed April 22, 2026, at the merits stage of a D.C. Circuit petition for judicial review under 41 U.S.C. § 4713, this consent amicus brief was submitted on behalf of the Foundation for Individual Rights and Expression, the Cato Institute, the First Amendment Lawyers Association, the Electronic Frontier Foundation, and the Chamber of Progress, all represented by Perkins Coie LLP. The brief argues that Claude's training, Anthropic's published "Claude's Constitution," and its Usage Policy collectively reflect protected editorial judgment analogous to curated media, invoking *Moody v. NetChoice* (2024) and *Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston* (1995) to frame these as constitutionally shielded expressive choices. On retaliation, the amici point to Secretary Hegseth's public characterizations of Anthropic's "ideology" and presidential statements labeling the company "RADICAL" and "WOKE" as facially establishing but-for retaliatory motive under the *NRA v. Vullo* (2024) framework. The brief also contends that the D.C. Circuit's April 8 stay panel applied an erroneously narrow "actual chilling" standard, and urges the merits panel to apply the *Aref v. Lynch* "ordinary firmness" standard instead. Oral argument is scheduled for May 19, 2026.

This brief pushes the D.C. Circuit toward a significant and unresolved doctrinal question: whether the First Amendment protects not just a developer's written governance documents — which fit comfortably within existing editorial-judgment precedent — but also the design choices embedded in an AI system itself. The retaliation theory, grounded in publicly documented government hostility toward Anthropic's expressed values, is the brief's most legally orthodox argument and tracks the *Vullo* playbook closely enough to warrant serious merits attention. If the D.C. Circuit reaches the AI-expression question, whatever it says will carry substantial weight in future disputes over government leverage over AI developers' product decisions — a dynamic that extends well beyond the procurement context.

2026-03-09 Other First Amendment AI Liability

PETITIONER BRIEF [2169950] filed by Anthropic PBC… — Attachment 01208843394

Issue: In *Anthropic PBC v. U.S. Department of War*, No. 26-1049 (D.C. Cir.), Anthropic argues that the Secretary's invocation of 41 U.S.C. § 4713 to exclude the company from federal AI procurement was procedurally defective, factually unsupported, and constitutionally impermissible — presenting the question of whether a national-security supply-chain designation can survive APA and First Amendment scrutiny when it was issued before the statute's mandatory preconditions were satisfied and the Secretary's own public statements expressed hostility to the target company's speech and advocacy. The case is unusual because § 4713 was designed to address covert foreign-linked threats (Kaspersky, Huawei), and Anthropic is a U.S. company holding Top Secret clearances that was engaged in active contract negotiations with the agency for six months before the designation issued.

Anthropic filed this opening brief on April 22, 2026, as petitioner in a D.C. Circuit proceeding seeking judicial review of a Department of War supply-chain-risk designation under 41 U.S.C. § 4713. The brief argues the designation is unlawful on four independent grounds: the Secretary issued it before receiving the joint recommendation that the statute requires as a precondition, invoking an emergency carve-out that Anthropic contends cannot apply after a half-year of negotiations; the designation misapplies a statute designed for covert foreign-sabotage threats to a transparent contractual dispute with a cleared domestic company; the agency's core security premise — that Anthropic could alter a deployed model — is factually false because a frozen, deployed model is technically inaccessible to its developer; and the Secretary's public condemnations of Anthropic's "ideology" and "virtue-signaling" constitute direct evidence of First Amendment retaliation under *NRA v. Vullo*, 602 U.S. 175 (2024). Anthropic seeks vacatur of the designation and all associated covered procurement actions, with oral argument scheduled for May 19, 2026.

This case tests whether courts will apply standard APA arbitrary-and-capricious review — including its requirement that agencies follow their own statutory sequence and engage with contrary factual evidence — to national-security procurement decisions that agencies have historically shielded from meaningful judicial scrutiny. The procedural-inversion argument, if accepted, would establish that even the § 4713 emergency carve-out has real limits when the record reflects self-induced urgency, a holding with broad implications for how agencies invoke national-security exigencies to bypass procedural requirements. The First Amendment retaliation theory is the brief's most novel and contested contribution: if the D.C. Circuit reaches it, the case could clarify whether *Vullo*'s government-coercion framework extends to procurement exclusions where agency officials have publicly disparaged a contractor's expressive advocacy, a question with significant consequences for AI companies whose public policy positions increasingly put them in tension with government clients.

2026-03-09 Other First Amendment AI Liability

Amicus: Taxpayers Protection Alliance Foundation

Issue: In *Anthropic PBC v. United States Department of War*, the Taxpayers Protection Alliance Foundation argues that a government-wide federal procurement blacklist issued against a domestic AI company violates the First Amendment when the triggering basis is the company's publicly stated limitations on how its products may be used. The case asks whether the "supply chain risk" designation authority under 41 U.S.C. § 4713 lawfully reaches a domestic contractor's product-use policies articulated during an ordinary contract dispute — or whether applying the statute in that way both misreads its text and penalizes protected speech. The non-obvious difficulty is that Anthropic's stated policy positions and its product's functional capabilities may be inseparable, leaving it genuinely contested whether the government acted as a censor or simply as a contracting officer assessing performance.

The Taxpayers Protection Alliance Foundation filed this consent amicus curiae brief on April 22, 2026, in the U.S. Court of Appeals for the D.C. Circuit, supporting Petitioner Anthropic PBC's petition for judicial review of a Department of War agency designation under 41 U.S.C. § 4713. TPAF advances three independent grounds for vacating the designation. First, it argues that by extending the blacklist government-wide — reaching agencies including Treasury, State, HHS, and Energy far beyond the original DOW contract — Respondents penalized speech outside the scope of any legitimate program condition, in violation of the unconstitutional conditions doctrine under *AID v. Alliance for Open Society*. Second, it contends that § 4713's operative terms, read through *ejusdem generis* and the *Fischer v. United States* methodology of using surrounding statutory context to cabin broadly worded provisions, do not reach a domestic company asserting product limitations during a contract dispute. Third, TPAF raises a non-delegation challenge, arguing that if § 4713 permits designation on these facts, it supplies no intelligible limiting principle under *Gundy v. United States* and *FCC v. Consumers' Research*.

This brief is worth watching primarily because of its unconstitutional conditions framing: by grounding the First Amendment claim in the government-wide scope of the ban rather than the original contract dispute, TPAF gives the D.C. Circuit a doctrinal hook — rooted in *Alliance for Open Society* rather than the more government-favorable *Rust v. Sullivan* — that does not require the court to resolve whether an AI company's values statements and its product functionality are legally separable. That question is genuinely open: no court has squarely addressed whether a national-security procurement statute can support a cross-agency blacklist when the designated "risk" is a contractor's public advocacy about permissible uses of its own technology. The statutory misapplication argument, while creative, turns on whether courts will read § 4713's supply-chain-risk authority as limited to intentional adversarial actors — a reading the government can contest — making the First Amendment theory the stronger vehicle for Petitioner's relief.

2026-03-09 Other First Amendment AI Liability

Amicus: TechNet

Issue: Whether the Department of War's designation of Anthropic PBC as a supply-chain risk under 41 U.S.C. § 4713 (FASCSA), issued without the statute's required procedural safeguards and allegedly in retaliation for a contract dispute, violates both FASCSA's procedural mandates and Anthropic's First Amendment rights against compelled speech and viewpoint-based retaliation.

Following a contract dispute over Anthropic's Terms of Service, President Trump issued a social-media directive ordering all federal agencies to cease use of Anthropic's technology, and Secretary of War Hegseth simultaneously announced via social media that Anthropic was designated a supply-chain risk; six days later, the Pentagon issued formal determinations invoking 41 U.S.C. § 4713 and 10 U.S.C. § 3252. Anthropic petitioned the D.C. Circuit for review and moved for an emergency stay. Industry trade associations TechNet, SIIA, CCIA, and ITI filed this amicus brief in support of the stay, arguing that DoW bypassed FASCSA's required procedural steps—including joint recommendations, notice, opportunity to respond, written findings, and congressional notification—and that the designation constitutes both compelled speech under *Miami Herald Publishing Co. v. Tornillo* and viewpoint-based retaliation under *NRA v. Vullo*.

This case presents a potentially novel question of whether FASCSA's national-security supply-chain designation authority — previously applied only to foreign entities — can be used against a domestic AI contractor, and whether such use triggers First Amendment scrutiny as government-compelled alteration of an expressive AI product or as retaliation for a company's negotiating position. A ruling on either question could significantly constrain executive procurement power over AI developers.

2026-03-09 Appellate Opinion First Amendment AI Liability

Amicus: Foundation for Individual Rights and Expression

Issue: Whether the Department of War's designation of Anthropic as a "supply chain risk" under 41 U.S.C. § 4713 — issued in retaliation for Anthropic's refusal to remove safety constraints from its Claude AI system that prohibited use for fully autonomous weapons and mass domestic surveillance — violates the First Amendment's prohibitions on compelled speech, viewpoint-based retaliation, and government coercion of private expressive choices.

Anthropic petitioned the D.C. Circuit for judicial review of the Department of War's § 4713 supply chain risk designation and filed an emergency motion for a stay pending review. Amici curiae — including FIRE, EFF, the Cato Institute, Chamber of Progress, and the First Amendment Lawyers Association — filed this brief in support of that stay motion, arguing two independent First Amendment violations: first, that Anthropic's human-authored usage policies and Claude's designed safeguards constitute protected expression under *Moody v. NetChoice*, such that the government's demand to alter those safeguards amounts to compelled speech; and second, that senior Pentagon officials' public statements demonstrate that the designation was transparently retaliatory against Anthropic's ideological dissent rather than grounded in any genuine security concern. Amici further argue that the use of AI for domestic surveillance raises independent First Amendment chilling-effect concerns warranting the stay.

This filing presents what may be the first appellate-level First Amendment challenge to government action coercing an AI developer to modify its model's content and safety constraints, directly testing whether an AI system's trained outputs and a developer's usage policies constitute protected speech and editorial judgment under *Moody v. NetChoice*; the court's resolution could establish whether and how the First Amendment limits the government's ability to condition procurement relationships on an AI company's willingness to remove safety guardrails.

2026-03-09 Other First Amendment AI Liability

Emergency Motion to stay underlying order — Attachment 2

Issue: Whether the Department of War's designation of Anthropic PBC as a supply chain risk and resulting covered procurement action under 41 U.S.C. § 4713 of the Federal Acquisition Supply Chain Security Act of 2018 is unlawful as arbitrary, capricious, contrary to constitutional right, or otherwise not in accordance with law.

Anthropic PBC filed a petition for judicial review in the D.C. Circuit pursuant to 41 U.S.C. § 1327(b), challenging covered procurement actions taken by Department of War Secretary Peter B. Hegseth under 41 U.S.C. § 4713. The addendum filed on March 11, 2026 includes the statutory provisions relied upon, multiple supporting declarations from Anthropic personnel and counsel, and exhibits consisting of the Department of War's § 4713 notice to Anthropic, internal Pentagon memoranda, public statements by Secretary Hegseth and President Trump, and media reports concerning the Pentagon's threatened and actual actions against Anthropic. The court is asked to review the covered procurement action under the standards set forth in § 1327(b)(2), which permits the court to hold such actions unlawful if found arbitrary, capricious, contrary to constitutional right, in excess of statutory authority, lacking substantial support in the administrative record, or not in accord with required procedures.

This petition presents a rare test of the judicial review mechanism established by FASCSA for supply chain exclusion actions targeting an AI developer, potentially establishing how constitutional claims — including First Amendment challenges — may be raised against national security-justified procurement exclusions of AI companies under § 4713's otherwise heavily restricted review framework.

2026-03-09 Appellate Opinion First Amendment AI Liability

Emergency Motion to stay underlying order

Issue: Whether the Department of War's invocation of 41 U.S.C. § 4713 to designate Anthropic a "supply-chain risk" — without prior notice of supporting evidence, a 30-day opportunity to respond, or a written determination of statutory predicates — violates the procedural and substantive requirements of the Federal Acquisition Supply Chain Security Act, the Due Process Clause of the Fifth Amendment, and the First Amendment's prohibition on government retaliation for protected speech and petitioning activity.

Anthropic filed an emergency motion for stay pending review in the D.C. Circuit on March 11, 2026, seeking relief by March 26, 2026, after the Department of War issued a § 4713 Notice on March 4, 2026, formally designating Anthropic a supply-chain risk and national security threat following Anthropic's public refusal to remove usage-policy restrictions on lethal autonomous warfare and mass surveillance of Americans. Anthropic argues the Secretary acted contrary to law and arbitrarily by bypassing § 4713's mandatory sequencing — obtaining a risk assessment, disclosing the evidentiary basis to the targeted source, and affording 30 days to respond before issuing a written determination — and that the Department never invoked the statute's narrow urgent-national-security exception to justify immediate effect. Anthropic further argues the Secretary's own public statements, characterizing Anthropic's refusal as "sanctimonious rhetoric," "betrayal," and "corporate virtue-signaling," establish that the designation was retaliatory for protected expressive and petitioning activity under *National Rifle Ass'n of America v. Vullo*, 602 U.S. 175 (2024), and *Media Matters for America v. Paxton*, 138 F.4th 563 (D.C. Cir. 2025). Anthropic applied the standard four-factor stay test under *Nken v. Holder*, 556 U.S. 418 (2009), contending that all factors — likelihood of success on the merits, irreparable harm, balance of equities, and public interest — favor a stay.

This motion presents what appears to be the first judicial challenge to a § 4713 supply-chain-risk designation issued against an American AI developer, and potentially the first such designation against any domestic company, raising novel questions about the statute's procedural floors and whether the government may weaponize national-security procurement authority to coerce AI developers into removing safety guardrails on their models. If the D.C. Circuit reaches the First Amendment retaliation claim, its ruling could significantly extend *Vullo*'s coercion doctrine into the AI-regulation context, constraining the government's ability to use contracting and debarment powers as leverage against companies that publicly resist demands to alter AI safety policies.
