Artificial Intelligence (AI) company Anthropic and the Trump White House had a public rupture last week, of a kind unusual in American industry. The company refused to permit the use of its technology for mass surveillance or autonomous weapons. The administration designated it a national security threat, placing it in the same bracket as China’s Huawei and Russia-linked Kaspersky. Yet within a day, the US government used Anthropic’s tools for its strikes in Iran. Details of the deployment remain unclear, but a similar deployment earlier in the year helped the US military select targets and run battlefield simulations when it captured Venezuela’s president Nicolás Maduro.
The confrontation has been read as a tech industry spat — from an American or a Western vantage point. For the rest of the world, it carries three signals that demand attention.
The first is what this technology already does in war. Take target selection. It sits at the centre of international humanitarian law — the principles of distinction and proportionality governing who may be killed and under what conditions. Those frameworks were built around human decision-makers. Then there is intelligence analysis — a domain where AI can compress the time between raw intelligence and operational decision. For those who possess it, war has already changed — faster in its decisions and murkier in its accountability.
The second is where the capability lies. Meaningful frontier AI is held by, perhaps, a dozen organisations, almost all American or Chinese. This is not an arms race in any conventional sense — most nations are not competitors, they are outside the frame. The distance between those who possess this technology and those who depend on it is already wide, and widening.
Which brings up the third point: What happens when countries that dominate AI weaponise it against countries that depend on AI as infrastructure? Countries that have woven American-origin AI into education, health infrastructure, and state administration have acquired, without full reckoning, a new category of strategic exposure.
Beneath the headlines of the Anthropic-White House clash is a reckoning for every government that has, without full deliberation, made a foreign technology part of its infrastructure. India is among them. The question is not whether to engage with AI — the technology is too consequential to ignore. It is whether countries arrive at that engagement with a clear understanding of what they are trading away. This week suggested the window for that thinking may be shorter than assumed.