Less than six months after announcing a $100 billion commitment to OpenAI, Nvidia appears to have walked back that pledge. OpenAI has announced a new $110 billion funding round that values the company at $840 billion, or $730 billion on a pre-money basis. The round includes $30 billion from SoftBank, $50 billion from Amazon, and $30 billion from Nvidia – considerably lower than the $100 billion announced in September.

“Today we’re announcing $110B in new investment at a $730B pre-money valuation. This includes $30B from SoftBank, $30B from NVIDIA, and $50B from Amazon. We’ve also signed a strategic partnership with Amazon and secured next generation inference compute with NVIDIA. Additional financial investors are expected to join as the round progresses,” OpenAI announced.

Last year, Nvidia and OpenAI announced a plan under which Nvidia would invest up to $100 billion in OpenAI and build 10 gigawatts of computing infrastructure for the AI company – seen as a major vote of confidence in OpenAI from one of the most powerful companies in the AI hardware space.
What actually happened
According to a report in the Wall Street Journal earlier this year, Nvidia CEO Jensen Huang told industry associates that the original $100 billion agreement was non-binding and was never finalised. In other words, the headline number that made news last year was never a firm commitment. Reports also claimed that Huang had expressed reservations about OpenAI, criticising what he described as a lack of discipline in the company’s business approach.
The new deal: smaller but still significant
While Nvidia is still writing a cheque, it is for $30 billion, not $100 billion. The infrastructure commitment has also been reconfigured. As part of the new arrangement, OpenAI will gain access to 3 gigawatts of dedicated inference capacity and 2 gigawatts of training capacity on Nvidia’s next-generation Vera Rubin systems, building on the Hopper and Blackwell systems already running across Microsoft, Oracle Cloud Infrastructure, and CoreWeave.

“We are also expanding our long standing collaboration with NVIDIA, including the use of 3GW of dedicated inference capacity and 2 GW of training on Vera Rubin systems. This builds on Hopper and Blackwell systems already in operation across Microsoft, OCI, and CoreWeave. Together, this capital and infrastructure expansion strengthens our ability to train and deploy frontier models at global scale,” the ChatGPT-maker said.

Meanwhile, Amazon has stepped in as a major player, committing $50 billion in total: an initial $15 billion followed by a further $35 billion once certain conditions are met. Alongside the investment, OpenAI and Amazon have struck a separate commercial deal under which OpenAI will use 2 gigawatts of computing capacity powered by Amazon’s in-house Trainium chips.
