Wednesday, March 4


OpenAI is amending its Pentagon deal after critics said the original contract didn’t actually ban AI-powered mass surveillance of Americans. CEO Sam Altman added explicit language prohibiting domestic surveillance—including through commercially purchased data—and confirmed NSA use is off the table for now. He also said he’d “rather go to jail” than follow an unconstitutional order, and urged the DoD to offer Anthropic the same revised terms.

OpenAI is amending its freshly signed deal with the US Department of Defense after fierce public blowback over whether the agreement actually protected Americans from AI-powered surveillance. CEO Sam Altman confirmed the changes on Monday, saying the company had been working with the Pentagon to add clearer language—and offered one of his most personal statements yet on where he stands if push comes to shove.

“If I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it,” Altman wrote in an internal post that he later made public.

The updated contract now explicitly states that OpenAI’s tools “shall not be intentionally used for domestic surveillance of US persons and nationals”—including through the purchase of commercially acquired personal data like location history or browsing records. The Pentagon also confirmed that OpenAI’s services will not be used by intelligence agencies such as the NSA, at least for now. Any future use by those agencies would require a separate contract modification.
What the original deal actually said—and why critics weren’t buying it

The original agreement, announced last Friday, had already drawn significant scrutiny. According to reporting by The Verge, the deal didn’t actually prohibit mass surveillance—it simply required OpenAI to comply with existing laws, many of which have historically been stretched to cover sweeping domestic spying programs. Critics pointed out that the NSA’s PRISM program and other bulk data collection efforts had all operated under the same legal framework OpenAI was citing as a safeguard.

OpenAI’s former head of policy research, Miles Brundage, put it bluntly on X: “OpenAI employees’ default assumption here should unfortunately be that OpenAI caved and framed it as not caving.”

OpenAI pushed back, with a spokesperson telling The Verge that the system “cannot be used to collect or analyze Americans’ data in a bulk, open-ended, or generalized way.” But UC Berkeley researcher Sarah Shoker noted that the vagueness of the language—words like “unconstrained” and “generalized”—left plenty of room for interpretation.

OpenAI struck a deal where Anthropic couldn’t—but the terms matter

The Pentagon deal came hours after the DoD declared Anthropic—OpenAI’s main rival—a “supply chain risk to national security,” a designation historically reserved for foreign adversaries. Anthropic had refused to drop two restrictions from its contract: no mass domestic surveillance and no fully autonomous weapons that can kill without a human in the loop.

OpenAI agreed to the Pentagon’s core requirement of “all lawful use,” something Anthropic would not. The New York Times reported that Altman and DoD Chief Technology Officer Emil Michael had been in talks since Wednesday, and reached a framework within days—aided in part by the two men having a far better personal relationship than Michael had with Anthropic CEO Dario Amodei.

A Trump administration undersecretary later confirmed that the OpenAI deal was “a compromise that Anthropic was offered, and rejected”—meaning Anthropic had seen similar terms and turned them down.

Altman says he pushed for Anthropic to get back in—and wants democratic oversight, not tech-company control

Despite the competitive optics of swooping in right after Anthropic’s deadline collapsed, Altman has been vocal that he does not want the Pentagon standoff to become a permanent fracture. He said in his internal post that he had told the DoD that Anthropic should not be designated a supply chain risk, and asked that the same amended terms be made available to all AI companies.

“We do not want the ability to opine on a specific (and legal) military action,” Altman wrote separately. “But we do really want the ability to use our expertise to design a safe system.”

He also acknowledged missteps. Rushing the Friday announcement, he said, “just looked opportunistic and sloppy”—a rare moment of self-criticism from a CEO who has spent the past year navigating some of the most politically charged deals in Silicon Valley history.
