OpenAI CEO Sam Altman late Friday said his company has struck a deal with the Pentagon to deploy its AI models on classified networks — just hours after President Donald Trump ordered every federal agency to immediately stop using rival Anthropic’s technology and Defense Secretary Pete Hegseth labeled the company a “supply-chain risk to national security.”

Altman, in a post on X, said the agreement includes the same two safety guardrails that Anthropic had been fighting the Pentagon over for weeks — prohibitions on mass domestic surveillance and a requirement for human oversight over the use of force, including autonomous weapons.

“The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” Altman wrote, using the Trump administration’s preferred name for the Department of Defense.

In a pointed message aimed at Anthropic, Altman added that OpenAI is asking the Pentagon to offer the same terms to every AI company — suggesting that Anthropic’s months-long standoff with the military was, in his view, unnecessary.
Anthropic blacklisted after refusing Pentagon’s ‘all lawful uses’ demand
The deal landed at the end of one of the most dramatic weeks in AI industry history. Anthropic, the maker of Claude, had been locked in tense negotiations with the Pentagon over a $200 million contract. The military wanted unrestricted access to Claude for all lawful purposes. Anthropic CEO Dario Amodei refused, insisting the company needed explicit assurances its AI wouldn’t be used for mass surveillance of Americans or fully autonomous weapons.

When talks collapsed past a Friday evening deadline, Hegseth moved swiftly. He designated Anthropic a supply-chain risk — a label historically reserved for foreign adversaries like Chinese tech firms, never before publicly slapped on an American company. Trump piled on with a Truth Social post calling Anthropic a “RADICAL LEFT, WOKE COMPANY” and threatening “major civil and criminal consequences” if it didn’t cooperate during a six-month phase-out period.
Sam Altman got the same red lines — but negotiated differently
The key difference between the two deals comes down to approach. Anthropic wanted contractual language explicitly barring surveillance and autonomous weapons. Altman, who started talks with the Pentagon on Wednesday, agreed to the “all lawful uses” framework the military wanted — but negotiated the right to bake technical safeguards directly into OpenAI’s models.

These include keeping models confined to cloud environments rather than edge deployments (which would be needed for autonomous weapons), deploying OpenAI personnel with security clearances alongside military teams, and building monitoring systems the company can continuously strengthen.
Silicon Valley rallied behind Anthropic, but OpenAI may walk away the winner
Despite broad support for Anthropic’s stance — dozens of OpenAI and Google employees signed an open letter backing its position — the outcome could end up being a major business win for OpenAI. The company announced a $110 billion funding round the same day, with Amazon investing $50 billion. That Amazon partnership is particularly significant because the Pentagon accesses classified systems through Amazon’s cloud infrastructure — something OpenAI previously lacked.

Anthropic, for its part, isn’t backing down. The company said it will challenge the supply-chain designation in court and that “no amount of intimidation or punishment” will change its position.

The Pentagon has not yet explained why it accepted safety guardrails from OpenAI while rejecting similar ones from Anthropic.
