Demis Hassabis had spent months shuttling between London and Mountain View—11-hour flights, jet lag, endless legal negotiations—trying to wrangle DeepMind into a semi-independent Alphabet company with its own oversight board. Then, on January 9, 2017, he sat down with Sundar Pichai at Google's headquarters, Mustafa Suleyman dialled in from Asilomar, and the whole thing unravelled.

Pichai sounded warm, open, amenable. There were just a few details left to iron out, he said. Resolve them with David Drummond. The next day, Drummond showed up and announced that Hassabis had misread the room entirely. Pichai was against Alphabetisation. Full stop.
AI was too valuable to Google for Pichai to let it walk out the door
The crux of Pichai's objection, as laid out in Sebastian Mallaby's new book The Infinity Machine, was strategic. The Alphabet "bet" structure—the one that housed projects like autonomous cars and life-extension science—was designed for moonshots with no connection to Google's core business. Artificial intelligence was a different animal entirely.

AI was destined to be central to Google Search, Google Cloud, and virtually everything else the company touched. Spinning DeepMind out, even partially, would mean handing away the engine. That wasn't something Pichai was willing to do.

Hassabis and Suleyman had assumed Larry Page's support would carry the day. It didn't. Page showed up two hours late to one of the decisive meetings. Sergey Brin was even later. With the founders effectively checked out, Pichai was the one making the calls.
DeepMind had a $1 billion walk-away plan. It never got used.
The negotiations had started in late 2015 as a genuine attempt to build something new: a so-called 3-3-3 governance board, with three DeepMind representatives, three from Alphabet, and three independents. Hassabis and Suleyman had even assembled a walk-away plan—$5 billion in outside capital, a $1 billion commitment from Reid Hoffman, a legal structure they called a "global interest company." None of it landed.

By mid-2017, Suleyman was announcing a DeepMind spin-out to the entire company at an off-site in the Scottish Highlands. Weeks later, Google sent back the proposal covered in red lines.
After years of failed negotiations, Hassabis changed his entire theory of AI safety
By the time Hassabis reflected on the saga—now as CEO of the enlarged Google DeepMind, having absorbed Google Brain and related teams—his conclusion was blunt. Governance structures, he said, don't work. Not really. Independent boards pursue their own agendas. Safety charters draw lines in the wrong places.

"Safety isn't about governance structures," Hassabis said. The better play, he decided, was earning real trust inside Google—being at the table when decisions got made, rather than negotiating from outside it.

He called it moving from idealist to realist, hopefully with values intact.