Nvidia, the industry leader in graphics processing units (GPUs), recently reported a startling $14 billion profit for the latest quarter, driven largely by surging demand for the processors that power artificial intelligence. CEO Jensen Huang announced that the company would begin introducing new chip architectures every year, a major acceleration from its previous two-year cycle, signaling its intent to dominate the rapidly expanding AI landscape.
During the Q1 2025 earnings call, Huang said, “We’re on a one-year rhythm,” confirming industry rumors that the architecture codenamed “Rubin,” the successor to the current Blackwell architecture, will arrive in 2025. That timeline would put the much-anticipated R100 AI GPU on track to launch within that year.
The accelerated release cycle applies to Nvidia’s entire product line, not just its AI-focused chips. “We will proceed with them all at a rapid pace,” Huang said. “New CPUs, new GPUs, new networking NICs, new switches… a mountain of chips are coming.”
Nvidia is taking this bold approach at a time when demand for its H100 AI GPUs is higher than ever, thanks to the industry’s rapid adoption of generative AI technology. Some customers have ordered more than 100,000 units of these powerful chips as they scramble to secure supply. Notably, Meta, the parent company of Facebook, plans to have over 350,000 H100 GPUs in use by the end of the year.
“Do you want to be the company delivering groundbreaking AI, or the company, you know, 0.3 percent better?” Huang asked, underscoring the strategic stakes for businesses racing to stay ahead in AI.
Noting Tesla’s purchase of 35,000 H100 GPUs for its self-driving program, Nvidia’s CFO added that the automotive industry is on track to become the company’s “largest enterprise vertical within data center this year.”