In my last column, I argued that the server farm is the new oil refinery, a kinetic target requiring concrete walls, anti-drone domes, and deep integration into national air defence grids. The logic was simple: if your data lives in a physical building, that building can be bombed. The solution seemed equally simple: fortify it.
I was wrong to call it simple.
The moment a private data centre accepts military protection, the moment a soldier stands guard over Instagram photos and Aadhaar records, a far more dangerous equation is triggered. You don’t just gain a shield. You acquire a landlord. And landlords, especially those in uniform, tend to want keys.
This is the Bunker Paradox: the very infrastructure hardening that protects our data from kinetic attack may simultaneously expose it to a subtler, more permanent threat: absorption into the national security state.
The Dual-Use Target: What the Geneva Convention Doesn’t Know About Cloud Computing
International humanitarian law was built for a world of clear distinctions. A military airbase is a legitimate target. A civilian hospital is not. The rules are binary, and the architects of the Geneva Conventions had the luxury of designing them that way because the assets of war and peace rarely shared the same roof.
That luxury is gone.
A single hyperscale facility today may simultaneously host a teenager’s Google Drive, a bank’s transaction ledger, a hospital’s patient records, and, through a government cloud contract, military logistics software. The building is one structure. The data is indivisible, at least physically. A decapitation strike on the facility doesn’t surgically extract the defence ministry’s files; it incinerates everything in the same rack.
Legal scholars have begun to confront this directly. A recent Yale Law Journal analysis found that labelling objects ‘dual-use’ has had the paradoxical effect of creating a ‘porous category of targetable objects that are obviously critical to civilian life’, one that reduces targeting inhibitions rather than strengthening them. The Lieber Institute at West Point takes a more technical view, arguing that where an attacker can surgically strike a military section of a shared building, only that portion is a lawful target, but this assumes precision that loitering munitions launched at coordinates do not always provide.
“If a building houses both Instagram photos and military logistics, the Geneva Convention has no meaningful answer to offer.”
The ICRC’s own guidance on cyber operations acknowledges this directly: because civilian and military networks frequently share the same physical infrastructure — cables, satellites, routers — ‘maintaining the principle of distinction may be more difficult in cyberspace than in conventional warfare.’ Apply that logic to the physical layer, and you have a structural trap. India’s own data localisation drive, noble in its sovereignty ambitions, is constructing exactly this kind of concentrated vulnerability.
According to an Observer Research Foundation analysis, India currently generates nearly 20 percent of global data but accounts for only around 3 percent of the world’s data centre capacity. The hyperscale campuses rising in Mumbai, Chennai, and Hyderabad are designed to close that gap, but in doing so, they are creating strategic magnets. Fortifying them with national air defence integration does not resolve the dual-use problem. It confirms it.
The Keymaster Problem: Physical Uptime and the Price of Protection
Here is the transaction that no policy brief is willing to state plainly: when a state provides physical security for a private data centre, it implicitly acquires a form of leverage that money cannot easily repurchase.
Military protection is not a commercial contract like a hired security firm. It comes embedded with a doctrine of threat assessment, internal monitoring, and inevitably, the logic of insider threat detection. If the army deploys anti-drone domes over a facility and integrates it into the national air defence grid, can it really be expected to remain indifferent to what runs inside? The military’s mandate is not just to stop a missile from the outside. It is to ensure the facility cannot be compromised from within.
This creates what I call the Keymaster Problem: the entity providing physical ‘uptime’ will, sooner or later, request digital access in the name of sabotage prevention. The ask will be framed as a narrow, technical necessity — an audit log here, a network monitoring feed there. But the logic is inexorable. You cannot defend a box you cannot see inside of, or so the argument will go.
The history of signals intelligence is unambiguous on this point. The NSA’s PRISM programme, revealed by Edward Snowden in 2013, began with narrow statutory authority under FISA Section 702 and expanded into bulk collection from the servers of Google, Apple, Facebook, and Microsoft. In January 2025, a US federal court ruled that the FBI’s warrantless ‘backdoor searches’ of communications gathered under Section 702 were unconstitutional, twelve years after the programme launched. The infrastructure existed; the mission creep followed as a matter of institutional physics.
“A backdoor requested for security monitoring is architecturally indistinguishable from a backdoor requested for surveillance.”
The most vivid recent demonstration of this dynamic came not from an adversarial regime but from a democratic ally. In January 2025, the UK Home Office secretly issued a Technical Capability Notice to Apple under the Investigatory Powers Act, demanding blanket access to encrypted iCloud backups of Apple users worldwide. Not just British citizens. Worldwide. Amnesty International characterised it as ‘an alarming overreach’. Apple, rather than comply, withdrew its Advanced Data Protection feature from the UK market entirely. The UK government eventually abandoned the first order in August 2025, only for the Home Office to file a second, narrower order the following month.
The UK-Apple saga is not an aberration. It is a preview. The legal powers invoked, the Investigatory Powers Act, FISA Section 702, and the US CLOUD Act, all began as narrow, legitimate instruments. The CLOUD Act alone empowers US authorities to compel American cloud providers to produce data ‘in their possession, custody, or control,’ regardless of where that data is physically stored. In a world where ‘Sovereign AI’ demands local data residency, this creates a situation where a data centre sits on Indian soil, but its cloud operating layer remains subject to American or other foreign jurisdiction.
The Solution That Actually Exists: Confidential Computing and the Hardware Firewall
The good news, and there is genuine good news here, is that this is not a problem without a technical resolution. The field of confidential computing offers a path out of the Keymaster Problem, grounded not in legal promises or policy frameworks, but in mathematics.
Most encryption protects data at rest (stored on disk) and in transit (moving across networks). Confidential computing solves a harder problem: protecting data in use while it is actively being processed in memory, at the exact moment when it is most vulnerable to a privileged insider with system-level access. The Confidential Computing Consortium defines it as ‘the protection of data in use by performing computations in a hardware-based, attested trusted execution environment (TEE)’.
The mechanism is a hardware-enforced Trusted Execution Environment — effectively a sealed vault within the processor itself. Intel’s TDX, AMD’s SEV-SNP, and ARM’s CCA each implement versions of this architecture. The data is decrypted and processed inside the TEE, and the TEE’s integrity is cryptographically attested before execution begins. A system administrator with root access, a military network monitor with full packet capture, even the cloud provider’s own engineers — none of them can read the plaintext.
This is not theoretical. In 2024, the US Department of Defense sanctioned the deployment of confidential computing-powered cloud infrastructure for protected mission-critical operations across several federal departments. The DoD’s own adoption demonstrates that the technology can satisfy military security requirements without requiring content-level access by the guardian entity. A July 2024 policy paper from the Future of Privacy Forum confirmed that confidential computing architectures allow organisations to ‘retain control over their data governance and privacy’ even when that data is processed within shared military-grade infrastructure.
“Confidential computing allows the military to guard the box without ever seeing the bits. That is not a metaphor. It is a hardware guarantee.”
IDC research from November 2025 found that 45 percent of organisations operating in hybrid or multi-cloud environments had adopted confidential computing, partly driven by regulatory mandates like the EU’s Digital Operational Resilience Act (DORA), which explicitly names ‘data in use’ as a protection requirement. In India, the CERT-In 2022 guidelines on incident reporting and the RBI’s 2018 payment data localisation directive are already anchoring demand for exactly this type of architecture.
The challenge is political will, not technical feasibility. Confidential computing at hyperscale requires buy-in from cloud providers, hardware vendors, and regulators simultaneously. It requires that governments accept a form of protection that comes with a hard ceiling on their own access. The UK-Apple episode suggests that is a ceiling some governments will spend considerable political capital trying to punch through.
Sovereign AI and the Infrastructure of Private Life
The deepest layer of the Bunker Paradox is not military at all; it is philosophical. ‘Sovereign AI’ has become the mantra of every major government from New Delhi to Paris to Riyadh. For India, the argument is coherent and, in its basic form, defensible. EY’s 2026 analysis frames it plainly: ‘Sovereign AI refers to a nation’s ability to design, develop and regulate AI systems using domestic infrastructure, national data and an indigenous workforce.’ The Observer Research Foundation goes further, comparing AI infrastructure to energy grids and telecommunications networks in strategic importance.
India’s ambitions are not rhetorical. The IndiaAI Mission represents over Rs 10,000 crore in investment across AI infrastructure, workforce development, and ethical frameworks. The G42-Cerebras partnership, announced in early 2026, deploys 8 exaflops of sovereign compute capacity in India. And yet, as Business Standard reported, India’s data centre ecosystem remains only ‘partially sovereign’. Domestic-controlled capacity sits at roughly 40-45 percent, with foreign-controlled infrastructure accounting for the rest, and effective compute control, particularly through hyperscaler cloud platforms, remaining predominantly external.
This gap is precisely where the Bunker Paradox becomes most acute. When India fortifies a data centre that runs on AWS or Azure infrastructure, which sovereign, exactly, is being served? When Microsoft France’s general manager testified under oath before the French Senate in 2025, he could not guarantee that French citizen data was safe from access by US authorities. Representatives from Google, Amazon, and Salesforce all confirmed the same: they would hand over European citizen data to US authorities if required by court order.
“Sovereign AI is not established through flagship models alone. It requires sustained national control over end-to-end infrastructure — including the stack that a foreign army might be hired to protect.”
Global Data Centre Hub’s quarterly analysis captured the inflection point starkly: by Q3 2025, ‘data centers had transitioned from private utilities to instruments of national power.’ Sovereign funds, hyperscalers, and governments had fused into a single deployment apparatus. This is not inherently dangerous. But it is consequential.
The rhetorical drift from ‘Sovereign AI’ to ‘State AI’ — from national independence to national ownership — is where the threat to privacy lives. And the bunkerisation of civilian data centres, integrated into military grids, accelerates that drift. The endpoint of that logic is not security. It is ownership. Not the temporary custody of a landlord, but the permanent entitlement of a sovereign.
The Bottom Line
We need the bunkers. That argument stands. Physical infrastructure is now a kinetic target, and the nation that leaves its hyperscale data centres undefended is leaving its economy exposed to a form of attack that no firewall can counter.
But we must be precise about what we are building. A fortified data centre integrated into national air defence is not a neutral upgrade. It is a political act that transforms a private commercial facility into an instrument of national power, with all the access implications that entails.
The Bunker Paradox can be managed, but only if we are honest about it. That means mandating confidential computing architectures as a condition of military protection, not as an optional feature. It means writing the legal separation between physical guardianship and digital access into the defence contracts before the concrete is poured, not after. And it means resisting the rhetorical drift from ‘Sovereign AI’ to ‘State AI’ — recognising that the infrastructure of our private lives can be defended without being owned.
The UK’s repeated attempts to extract a master key from Apple’s encrypted cloud show what happens when that boundary is treated as negotiable. The DoD’s own adoption of confidential computing shows it doesn’t have to be. The thickness of server room walls is now a measure of national strength. But what those walls contain must remain ours.
(The author is studying Computer Science and Artificial Intelligence at Rutgers University, New Brunswick, USA)
