Anthropic PBC v. U.S. Department of War
Issue
Whether the U.S. Department of War may compel Anthropic PBC to strip its AI model Claude of usage-policy restrictions—specifically prohibitions on mass surveillance of Americans and lethal autonomous warfare—as a condition of continued government contracting, implicating First Amendment and compelled-speech doctrine as applied to an AI developer's editorial control over its model's permitted uses.
What Happened
Anthropic filed suit against the U.S. Department of War in the Northern District of California. This document is a supporting declaration by Anthropic co-founder and Chief Science Officer Jared Kaplan, filed in connection with what appears to be a motion for preliminary relief. Kaplan attests that the Department demanded Anthropic remove its Usage Policy restrictions across all existing and future offerings, thereby permitting "all lawful uses" by DoW and its contractors, and delivered an ultimatum that refusal would cost Anthropic all current and future Department business. Anthropic agreed to shift from a "whitelist" to a "blacklist" approach but refused to eliminate two specific prohibitions: mass surveillance of Americans and lethal autonomous warfare. Kaplan describes these as safety-critical limits grounded in Anthropic's technical judgment about Claude's current capabilities and in the inadequacy of existing legal frameworks to address AI-enabled surveillance at scale.
Why It Matters
This case presents a novel question: whether the government can use its contracting power to coerce an AI developer into removing self-imposed safety restrictions on a deployed model. A ruling could set precedent on the unconstitutional-conditions doctrine as applied to AI policy restrictions, and on the extent to which an AI company's usage policies constitute protected editorial or expressive conduct under the First Amendment.
Related Filings
Other proceedings in the same litigation tracked by this monitor.