The Competition is Coming for Nvidia

It’s about to get hot in Nvidia’s kitchen. All year, competitors watched as the chipmaker added hundreds of billions in market cap and became AI operators’ go-to technology. They saw startups scrambling to rent time on its H100 chips, VCs buying thousands of them, and Nvidia CEO Jensen Huang go from relative unknown to tech icon in a blink. Here comes their response.

After ceding much of the early generative AI boom to Nvidia, these competitors are releasing — and proving the value of — chips that can train and run large language models much as Nvidia’s do. Tech giants like Alphabet, Amazon, and Microsoft are designing their own AI chips and training models on them. Rival chipmakers like AMD and Intel are producing AI-optimized chips of their own. And amid a supply crunch, startups are willing to try alternatives. Now, after a recent spate of releases, Nvidia’s easy times are ending.

“It’s the law of competition,” David Trainer, CEO of research firm New Constructs, told me. “People are going after this space because there’s an enormous amount of money to be made in it.”

First among Nvidia’s challengers are fellow trillion-dollar companies Amazon, Google, and Microsoft. It went largely unnoticed last week, but Google’s announcement of its Gemini AI model left Nvidia out entirely. Google trained Gemini on its own chips, called Tensor Processing Units, or TPUs, not Nvidia’s H100s, leaving industry insiders buzzing.

While virtually no one in the industry thinks Nvidia will lose its advantage overnight, the narrative that AI can’t function without it is coming apart.

By developing their own chips, these tech giants — who are also major Nvidia customers and partners — get silicon tailored to their use cases, better supply access, and some cost relief. That’s why Amazon, Google, and Microsoft are on record about using or building alternatives. And why OpenAI, a key Nvidia customer, is reportedly exploring its own chips as well.

“They’re seeking to reduce costs internally,” Insider analyst Jacob Bourne told me. “That puts pressure on Nvidia to keep providing the best hardware, but also to potentially lower the costs.”

The tech giants also offer AI services through their cloud divisions, and they can sell training, inference, and other AI services on their own chips, even as they partner with Nvidia. Google’s already made its TPUs available for customer use. Microsoft will use its chips to power its AI subscription service. And Amazon has announced that Anthropic, the AI research lab it invested more than $1 billion in, will use its chips through AWS. “They’re public cloud providers,” said Bourne. “They have an extensive data center footprint, and extensive data center infrastructure. They could potentially rely more on their own hardware.”

Meanwhile, professional chipmakers, including Intel and AMD, are also jumping in. They see that demand for AI chips can sometimes surpass Nvidia’s ability to deliver them, and view the high cost of H100s as an opportunity to undercut their rival.

On Thursday, Intel announced a new chip, called Gaudi3, that competes with Nvidia’s H100. In an interview on Big Technology Podcast this week, Intel CEO Pat Gelsinger told me he’s looking to capitalize on the moment. “Customers are looking for alternatives,” Gelsinger said. “All of a sudden, customers are saying, huh, they’re showing up winning some of the benchmarks, I want an alternative, and Nvidia’s short on supply. And I’m getting a much better total cost of ownership from Intel. Hey, let’s go start testing this.”

Last week, AMD introduced new MI300 chips that are purpose-built for AI training and inference. The chips, said Insider’s Bourne, can be swapped in for Nvidia’s without much disruption to AI operations. After AMD’s announcement, Nvidia shot back with a blog post claiming its chips were twice as fast. That may be true, but the response showed AMD had caught Nvidia’s attention.

Even with growing competition, Nvidia isn’t likely to fade anytime soon. It has a multiyear lead in research and development after its focus on gaming — which requires similar computing — proved extremely fortunate. Its GPUs are still more powerful for AI training than its rivals’ accelerators. It’s building more chips than its competitors. And startup founders like that Nvidia’s software lets them train and run AI models more easily.

“At least next year, Nvidia is going to maintain very strong AI market share,” Tristan Gerra, a senior research analyst at Baird, told me. “We’re looking at 81, 82% market share in AI this year, going to 78% next year, so that’s a pretty minimal decline.”

Still, over time, Nvidia won’t have the AI chip industry to itself. And the industry’s move toward smaller AI models, which require less computing power, may accelerate the shift. The company will still be a fact of life — and the likely leader — in AI chips for years. But it’ll have to work harder for every customer and dollar now that its competitors have arrived in force.

The post The Competition is Coming for Nvidia appeared first on TheWrap.