In the wake of Wednesday’s announcement that Nvidia’s earnings significantly outperformed expectations, Reuters reports that Nvidia CEO Jensen Huang expects the AI industry’s growth to endure well into next year. As an affirmation of this outlook, Nvidia has committed to repurchasing $25 billion worth of its shares, whose price has roughly tripled since the surge of generative AI enthusiasm began.
In a press release showcasing Nvidia’s financial achievements, including a remarkable quarterly revenue of $13.51 billion—marking a 101 percent increase from the prior year and an 88 percent surge from the previous quarter—Huang declared with enthusiasm, “A new era of computing has dawned.” He went on to note that businesses worldwide are shifting from conventional computing to accelerated computing and generative AI.
For those just tuning in, Reuters describes Nvidia as having a “near monopoly” on hardware that accelerates the training and deployment of neural networks, the driving force behind contemporary generative AI models. The company commands a substantial 60-70 percent share of the AI server market. Thanks to their parallel architecture, its data center GPU lines excel at the billions of matrix multiplications required to run neural networks. What began as graphics accelerators for video games now powers the generative AI boom.
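To see why matrix multiplication dominates the workload, consider a toy sketch (illustrative only, not Nvidia or OpenAI code): a single neural-network layer is essentially one matrix multiply followed by a nonlinearity, and large models chain thousands of such layers. The sizes below are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)

batch = rng.standard_normal((32, 512))      # 32 inputs, 512 features each
weights = rng.standard_normal((512, 2048))  # one layer's learned weights

# The core operation: (32 x 512) @ (512 x 2048) -> (32 x 2048).
# That's 32 * 2048 * 512 ≈ 33.5 million multiply-adds for ONE small layer,
# and every one of them can run in parallel -- which is what GPUs are built for.
activations = np.maximum(batch @ weights, 0.0)  # ReLU nonlinearity

print(activations.shape)  # (32, 2048)
```

On a CPU these multiply-adds run largely sequentially; a data center GPU spreads them across thousands of cores at once, which is the whole basis of Nvidia’s advantage here.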
Among Nvidia’s most popular AI hardware offerings are the A100 and H100 data center GPUs. Nvidia has also introduced the GH200 “Grace Hopper” chipset, which pairs an H100-class GPU with Nvidia’s Grace CPU and powers the company’s line of computer systems. These are not your typical consumer-grade gaming GPUs like the GeForce RTX 4090; The Verge reports that the H100 chip retails for around $40,000 and can execute a significantly higher volume of calculations per second.
The demand for GPUs in AI applications is immense, with Nvidia’s second-quarter data center revenue reaching $10.32 billion, dwarfing its consumer gaming revenue of $2.49 billion. In March, reports indicated that OpenAI’s widely used AI assistant, ChatGPT, might require as many as 30,000 Nvidia GPUs to run, although precise figures from the company remain undisclosed. Microsoft is also leveraging data centers equipped with “tens of thousands” of GPUs to power its implementations of OpenAI’s technology, which it is currently integrating into Microsoft Office and Windows 11.
As Huang succinctly puts it, “This is not a one-quarter thing.” The AI surge appears set to continue its trajectory well into the foreseeable future.
Nvidia’s dominant position in the market has left competitors like AMD scrambling to catch up. Currently, Nvidia’s lead appears almost insurmountable, as evidenced by its historic achievement in May when it became the first-ever $1 trillion chip company.
While Huang’s decision to repurchase stock when prices are at their highest carries inherent risk, it underscores his confidence in Nvidia’s sustained success. The strong demand for Nvidia’s chips has provided the financial means to execute this strategy, as demonstrated by the company’s impressive second-quarter performance. Notably, its adjusted gross margin, a key financial metric measuring the share of revenue left after accounting for the cost of goods sold, surged to 71.2 percent. This figure significantly outpaces the typical gross margins of semiconductor companies, which usually fall between 50 and 60 percent, as highlighted by Reuters.
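For a back-of-the-envelope sense of that metric: gross margin is simply revenue minus cost of goods sold, divided by revenue. The dollar figures below are hypothetical, chosen only so the result lands near Nvidia’s reported margin (the actual reported figure is an adjusted, non-GAAP number).

```python
def gross_margin(revenue: float, cost_of_goods_sold: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cost_of_goods_sold) / revenue

# Hypothetical example: $13.51B of revenue with ~$3.89B of costs
# would yield a margin in the ballpark of the 71.2 percent reported.
margin = gross_margin(13.51e9, 3.89e9)
print(f"{margin:.1%}")  # roughly 71.2%
```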
In an interview with Reuters, Huang identified two pivotal factors propelling Nvidia’s current success: the shift from data centers built around CPUs to those anchored by Nvidia’s graphics processing units (GPUs), and the growing utilization of generative AI systems like ChatGPT.
“These two fundamental trends are driving all that we’re witnessing, and we’re about a quarter into this transformation,” he explained to Reuters. “While it’s challenging to predict how many more quarters lie ahead, this fundamental shift isn’t ephemeral; it’s a long-term evolution.”
However, as with any burgeoning industry, there’s the specter of a potential downturn. Every boom ultimately faces the risk of a bust, and Nvidia may not be immune. Reuters reported that some analysts question whether demand for Nvidia’s GPUs can remain this strong indefinitely. Dylan Patel of SemiAnalysis, quoted by the news agency, suggested that many tech companies are riding the wave of AI hype, buying Nvidia GPUs speculatively without a clear plan to monetize generative AI. This behavior could be likened to a billion-dollar case of FOMO (fear of missing out).
“They must overinvest in GPUs or risk missing the boat,” warned Patel. “At some point, the genuine use cases will become apparent, and many players may curtail their investments, although others will likely continue to accelerate their commitment.”
Another potential hurdle on the horizon is product shortages. Reuters indicated that Huang regards securing the supplies needed to produce its expensive server hardware as Nvidia’s biggest risk. The company’s top seller this quarter is the HGX system, a supercomputer built around its H100 GPUs that requires sourcing numerous individual components.
“We’re receiving substantial support from our supply chain,” Huang assured Reuters in an interview. “Yet, it’s an intricate supply chain. People might assume it’s just a GPU chip, but it’s an exceedingly complex GPU system. It weighs 70 pounds, comprises 35,000 components, and costs $200,000.”
Furthermore, obtaining the H100 chips themselves has become increasingly challenging. Presently, the demand for high-powered GPUs far exceeds the supply, potentially posing a bottleneck to the pace of AI innovation. Nevertheless, this scarcity may also stimulate the development of innovative techniques to maximize the utility of available GPU power.