The new year has been like a dream thus far for Nvidia, which suddenly finds itself the US’s third-largest company by market capitalization ($1.98 trillion), leapfrogging Google’s parent Alphabet ($1.72 trillion) and Amazon ($1.81 trillion).
The graphics processing unit (GPU) manufacturer notched $60.9 billion in revenue during its 2024 fiscal year, a 126% increase over the previous year. The lion’s share came from Nvidia’s data center business, which generated $47.5 billion during the fiscal year, more than tripling the previous year’s total.
“The $1 trillion installed base of data center infrastructure is rapidly transitioning from general-purpose to accelerated computing,” Nvidia CFO Colette Kress said during the fourth-quarter earnings call.
Driving demand is the rapidly maturing field of artificial intelligence (AI), which hit an inflection point in 2023, says Chirag Dekate, vice president and analyst with Gartner’s Quantum Technologies, AI Infrastructures, and Supercomputing group. “What ChatGPT and Google’s Gemini did was to finally replace what seemed to be esoteric with what seemed to be incredible. It made it indispensable.”
The result has been a vast widening of AI’s applicability. “We started the AI journey with hyperscale cloud providers and consumer internet companies,” said Nvidia CEO Jensen Huang during the earnings call. “Now every industry is on board, from automotive to healthcare to financial services, industrial to telecom, media and entertainment.” Huang expects the world’s installed data center base to double in the next five years, creating a market worth hundreds of billions of dollars.
However, the top cloud-computing providers have been busy developing their own accelerator processors tailored to AI’s computing requirements. Google has its Tensor Processing Units (TPUs), Amazon Web Services has its Inferentia accelerator processor, and Microsoft has the Maia accelerator chip, all currently available in their respective cloud environments.
Non-cloud companies, such as Meta and Tesla, are also partnering with chip manufacturers to develop custom AI accelerators or are developing them internally. “I think the future is far more democratized and likely will have diverse options,” says Dekate. “GPUs would be part of the mix, but they will not be the only mix. Five years ago, that was not the case. Nvidia was everywhere. You’re now seeing the market shift and evolve towards a multi-vendor ecosystem.”