The US Department of Justice filed a 40-page reply to Anthropic’s lawsuit on Tuesday, arguing that the AI startup’s refusal to sign a contract allowing “any lawful use” of its Claude models by the military is a commercial dispute—not a free speech issue—and that the Pentagon was well within its rights to cut the company off. The government called Anthropic an “unacceptable” and “substantial” national security risk, and asked the federal judge overseeing the case to deny the company’s bid for a preliminary injunction that would pause the supply chain risk designation while the litigation plays out.
Pentagon says Anthropic could sabotage its own AI during active combat operations
The government’s core argument goes beyond standard procurement logic. In the filing, DoJ lawyers said that because AI systems are “acutely vulnerable to manipulation,” giving Anthropic continued access to the Department of War’s warfighting infrastructure carried serious risks—specifically that company staff might “sabotage, maliciously introduce unwanted function, or otherwise subvert” its models during active operations if Anthropic felt its internal red lines were being crossed. The government also noted it “cannot simply flip a switch” right now, given that Claude is currently the only AI model cleared for use on the department’s classified systems and high-intensity combat operations are underway.

That framing is a significant escalation. It repositions Anthropic not just as an inflexible vendor, but as a potential threat—a company whose ongoing ability to update and tune its own models makes it inherently untrustworthy in a wartime context.
Anthropic’s lawsuit argues Pentagon used ‘Supply Chain Risk’ label as ideological punishment
The backstory: Anthropic’s $200 million contract with the Pentagon collapsed after the two sides failed to agree on usage terms. Anthropic wanted explicit contractual guarantees that Claude would not be used for mass domestic surveillance or fully autonomous lethal weapons. The Pentagon countered that it was not a private company’s place to dictate how the military uses its tools, and demanded “all lawful use” access instead.

When talks broke down, Defense Secretary Pete Hegseth designated Anthropic a supply chain risk—a label previously reserved for foreign adversaries—effectively barring the company from federal contracts. Anthropic sued on March 9, filing cases in both the Northern District of California and the DC Circuit Court of Appeals. It argued the designation was unconstitutional retaliation for its safety policies, and warned that more than 100 enterprise customers may walk away because of it, potentially costing the company billions.
A preliminary injunction hearing is set for March 24
The DoJ pushed back on those financial concerns too, calling Anthropic’s claimed losses “speculative” and arguing they could be addressed through standard contract remedies—not emergency court relief.

The Pentagon also revealed in the filing that it is actively working to deploy AI from Google, OpenAI, and xAI as alternatives to Claude. Engineering work has already begun, according to the department’s chief digital and AI officer.

Anthropic has until Friday to file its counter-response. The preliminary injunction hearing is scheduled for March 24 in federal court in San Francisco.