OpenAI CEO Sam Altman has now pushed back against the criticism of his company’s deal with the US Department of War. Altman insisted that the agreement includes stronger safety guardrails than those Anthropic refused to accept before being blacklisted. In a blog post published on Saturday (February 28), OpenAI shared excerpts of its contract language, highlighting clauses that explicitly prohibit the use of its AI models for mass domestic surveillance, fully autonomous weapons, or high-stakes decision systems such as social credit scores. “We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s. We retain full discretion over our safety stack; we deploy via the cloud; cleared OpenAI personnel are in the loop; and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law,” the post read.
Anthropic’s refusal
OpenAI rival Anthropic was declared a supply chain risk by the Pentagon last week after the company refused to remove safeguards against those same use cases. The company vowed to challenge the designation in court, saying “no amount of intimidation or punishment from the Department of War will change our position.”
OpenAI’s position
Altman argued that OpenAI’s deal was partly designed to de-escalate tensions between the government and AI labs. “A good future is going to require real and deep collaboration between the government and the AI labs,” the post said. OpenAI added that it had asked the Pentagon to make the same terms available to other labs, including Anthropic. The company also emphasised that it maintains full control over its safety stack and could terminate the contract if the government violated its terms. “We don’t expect that to happen,” OpenAI noted.
Public backlash received by OpenAI
The deal has sparked widespread criticism, with concerns over the ethical implications of military use of AI. On social media, many users, including celebrities like Katy Perry, have voiced support for Anthropic, with some even cancelling their ChatGPT subscriptions in protest. Meanwhile, Anthropic’s model Claude surged to second place in Apple’s App Store on Saturday, reflecting a wave of public sympathy after its clash with the Pentagon.