Palmer Luckey, the founder of defense technology company Anduril, has shared his thoughts on the question of who should control the use of artificial intelligence. According to a report by Fortune, Luckey made his stance clear, saying that governments, not corporations, should decide how AI is deployed in national security. In a recent interview with the New York Post, Luckey argued that allowing tech executives to determine who they sell to risks undermining democracy. “We need to stick to a position that this is in the hands of the people,” he said. “Anyone who says that a defense company should be going beyond the law, beyond what legislators and elected leaders say in terms of who they’ll work with and not, you are effectively saying you do not believe in this democratic experiment, that you want a ‘corporatocracy.’”

Luckey further added, “In all cases, whoever the United States government tells me that I can and cannot sell to — to have any other position is to fall further into … basically corporate executives having de facto control over US foreign policy.”
Palmer Luckey’s comments come amid Anthropic’s clash with the Pentagon
Luckey’s comments come amid escalating tensions between AI giant Anthropic and the US Department of Defense. Recently, Anthropic CEO Dario Amodei refused to allow the Pentagon unrestricted use of the company’s AI systems for mass surveillance or fully autonomous weapons, prompting the agency to label the company a “supply-chain risk.”

The “supply-chain risk” designation, which is mainly reserved for foreign adversarial firms such as Huawei, has sparked controversy. Amodei said Anthropic would challenge the move in court, insisting that the Pentagon’s requests crossed ethical lines. “We cannot in good conscience accede to their request,” Amodei said in a press release.

Luckey’s comments highlight a growing divide in Silicon Valley: whether tech companies should retain the right to refuse certain government contracts based on ethical concerns, or whether elected officials alone should make those determinations. For Luckey, the answer is straightforward: AI decisions tied to national defense must remain in the hands of the government and, by extension, the people.
Anthropic vs Pentagon
The dispute between Anthropic and the Pentagon originates from the AI company’s refusal to lift its safeguards and let the military use its models for “all lawful purposes.” Although Claude is the only AI model running in the military’s classified systems, the firm has consistently insisted on blocking its use for what it calls the mass surveillance of Americans or the development of weapons that fire without human involvement.

For those unaware, Anthropic’s Claude was used during the operation to capture Venezuela’s Nicolás Maduro, through Anthropic’s partnership with Palantir. According to reports, the company’s AI tool was also used during the Iran strikes.

The Pentagon gave the company an ultimatum last week before designating it a “supply-chain risk.” Calling the decision “retaliatory” and “punitive,” Anthropic’s CEO said that US President Trump disliked the company for not giving “dictator-style praise.” Recently, the company said that it is in talks with the U.S. Department of Defense about the use of its AI models by the US military.
