NVIDIA Q2 Earnings Call (Part 1): Good News
- The second quarter was another record quarter. Revenue of $30 billion was up 15% sequentially, up 122% year-over-year, and well above the company's forecast of $28 billion.
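The quoted growth rates can be sanity-checked with simple arithmetic. The sketch below derives the implied prior-quarter and year-ago revenue from the reported $30 billion figure and growth percentages; the implied base-period numbers are back-of-the-envelope derivations, not figures reported on the call.

```python
# Back-of-the-envelope check of the growth rates quoted above.
# Reported inputs: $30B Q2 revenue, +15% sequential, +122% year-over-year.
# The implied base-period revenues below are derived, not reported.
q2_revenue_b = 30.0   # Q2 revenue, $ billions (reported)
qoq_growth = 0.15     # 15% sequential growth (reported)
yoy_growth = 1.22     # 122% year-over-year growth (reported)

implied_prior_quarter = q2_revenue_b / (1 + qoq_growth)
implied_year_ago = q2_revenue_b / (1 + yoy_growth)

print(f"Implied prior-quarter revenue: ${implied_prior_quarter:.1f}B")
print(f"Implied year-ago revenue:      ${implied_year_ago:.1f}B")
```

The implied year-ago base of roughly $13.5 billion shows how steep a 122% year-over-year climb is: revenue more than doubled in four quarters.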
- Data Center – Data center revenue of $26.3 billion rose 16% sequentially and a record 154% year-over-year, driven by strong demand for the NVIDIA Hopper, GPU computing, and networking platforms. Compute revenue grew more than 2.5x, and networking revenue more than doubled, compared with last year. Cloud service providers accounted for roughly 45% of data center revenue, with more than 50% coming from consumer internet and enterprise companies.
- Customers continue to accelerate their Hopper architecture purchases as they prepare to adopt Blackwell. Key workloads driving data center growth include training and inference of generative AI models; pre- and post-processing of video, image, and text data with CUDA and AI workloads; synthetic data generation; AI-driven recommender systems; and SQL and vector database processing. Next-generation models require 10 to 20 times more compute to train on substantially more data.
- This trend is expected to continue. Over the past four quarters, we estimate that inference accounted for more than 40% of data center revenue. CSPs, consumer internet companies, and enterprises benefit from the high throughput and efficiency of NVIDIA inference platforms. Demand for NVIDIA comes from tens of thousands of companies and startups building frontier models, consumer internet services, and generative AI applications for advertising, education, enterprise, healthcare, and robotics. Developers want NVIDIA's rich ecosystem and its availability across all clouds. CSPs appreciate NVIDIA's broad adoption and are expanding their NVIDIA capacity in light of high demand.
- The NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer internet, and enterprise companies. Built on the strength of the Hopper architecture, the H200 delivers more than 40% greater memory bandwidth than the H100. Data center revenue in China grew sequentially in Q2 and was a significant contributor to data center revenue; as a percentage of total data center revenue, however, China remains below the level seen before export controls were imposed. We expect the Chinese market to remain very competitive.
- The latest MLPerf inference benchmark round highlighted NVIDIA's inference leadership, with the NVIDIA Hopper and Blackwell platforms taking top marks across all tasks. At Computex, NVIDIA and leading computer makers unveiled Blackwell architecture-based systems and NVIDIA networking for building AI factories and data centers. With the NVIDIA MGX modular reference architecture, OEM and ODM partners can quickly and cost-effectively build more than 100 Blackwell-based system designs.
- The NVIDIA Blackwell platform brings together multiple GPUs, CPUs, DPUs, NVLink, and NVLink Switch chips with networking chips, systems, and NVIDIA CUDA software to power the next generation of AI across use cases, industries, and countries. The NVIDIA GB200 NVL72 system with fifth-generation NVLink enables all 72 GPUs to act as a single GPU, delivering up to 30x faster inference for LLM workloads and the ability to run trillion-parameter models in real time. Hopper demand is strong, and Blackwell is sampling broadly. We executed a change to the Blackwell GPU mask to improve production yield. The Blackwell production ramp is scheduled to begin in Q4 and continue into fiscal 2026.
- In Q4, we expect several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025, and Hopper supply and availability have improved. Demand for the Blackwell platform is well above supply, and we expect that to continue into next year.
- Networking revenue increased 16% sequentially. Ethernet for AI revenue, which includes the Spectrum-X end-to-end Ethernet platform, doubled sequentially, with hundreds of customers adopting our Ethernet offerings. Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers, and enterprises, including xAI, which uses it to connect the world's largest GPU compute cluster. Spectrum-X enhances Ethernet for AI processing and delivers 1.6x the performance of traditional Ethernet. We plan to launch new Spectrum-X products annually to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is on track to begin a multi-billion-dollar product line within a year.
- As countries recognize AI expertise and infrastructure as national imperatives for their societies and industries, our sovereign AI opportunities continue to expand. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach the low double-digit billions this year.
- The enterprise AI wave has begun, and enterprises also contributed to sequential revenue growth in the quarter. We are working with most of the Fortune 100 companies on AI initiatives across industries and regions. AI-powered chatbots,