Welcome to Eye on AI! In this edition...Nvidia set to cash in whether mega AI data centers boom or bust…OpenAI says it will make changes to ChatGPT after lawsuit from parents of teen who died by suicide…Librarians helped test which AI gave the best answers without making stuff up…China seeks to triple output of AI chips in race with the US.
Everyone’s talking about Nvidia’s Q2 earnings, reported yesterday. Not surprisingly, most of the focus was on the sophisticated, powerful GPU chips that made the company a $4 trillion symbol of the AI boom.
But the company’s huge bet on AI isn’t just about chips; it’s about the massive, billion-dollar data centers being built to house them. Data center revenue accounts for nearly 88% of Nvidia’s total sales—meaning the GPU chips, networking gear, systems, platforms, software and services that run inside AI data centers.
I’ve been noodling on something CEO Jensen Huang mentioned during the earnings call with analysts and investors—something that ties directly to my obsession with those mega facilities built to house tens of thousands of GPUs, that consume staggering amounts of energy, and which are used to train the massive models behind generative AI. These are facilities like Meta’s planned, gas-fueled campus in northern Louisiana—which President Trump touted yesterday with a photo showing it will sprawl to the size of Manhattan—or OpenAI’s $100-billion-plus Stargate Project.
On the call, Huang touted an Nvidia product called Spectrum-XGS—a hardware and software package that lets separate data centers function as one. Think of it as the pipes and traffic control that move data between data centers quickly and predictably.
Wait—I know your eyes are already glazing over, but hear me out. One of my nagging questions has long been: What if the billions being bet on these mega AI data centers winds up going bust?
Spectrum-XGS is built for the mega-AI clusters Huang has long predicted. But it also enables those who can’t build a single mega-facility—because of permitting or financing issues—to stitch multiple data centers together into unified “AI factories.”
Until now, there were only two options for finding more compute: add more chips to a single GPU rack, or pack more racks into one giant facility. Spectrum-XGS introduces a third option: link multiple sites so they work together like one colossal supercomputer. AI cloud company CoreWeave, which rents out access to GPUs, is deploying the technology to connect its own data centers.
On the earnings call, Huang highlighted Spectrum-XGS, saying it would help “prepare for these AI super factories with multiple gigawatts of computing all connected together.” That’s the growth story Huang has been telling investors for years.
But what happens if it doesn’t unfold that way? There are several other scenarios: The most dire, of course, would be a true “AI winter,” in which the AI bubble pops and demand for AI from business and consumers plummets. In that case, demand for data centers optimized for AI—whether on mega campuses or in distributed, smaller data centers—would vanish and Nvidia would have to find other sources of revenue. Another possibility is that AI models become much smaller and are mostly used on laptops and mobile devices for “inference,” or outputting results. In that case, the demand for data centers could also drop and Nvidia would need to hedge against that.
However, there is yet another scenario in which few mega-campuses get built but technology like Spectrum-XGS still winds up helping Nvidia. If the mega-campus model falters—because of power shortages, financing constraints, or local pushback—Nvidia could still win as long as enough customer demand remains. Spectrum-XGS makes smaller or less centrally located facilities more usable, so demand could shift from mega-campuses to distributed ones without leaving Nvidia behind.
In other words, Nvidia has positioned itself so that whether the industry keeps building massive new hubs or turns to linking together smaller, scattered sites, customers will still need Nvidia’s hardware and software—as well as its AI chips, of course.
Of course, Nvidia’s hedge doesn’t mean local communities are protected if the massive data centers being built in their backyards end up being white elephants. Towns that banked on jobs and tax revenue could still be left with ghost campuses and hulking concrete shells. And all of this depends on Spectrum-XGS working as promised and major players signing on. Customers haven’t tested it at scale in the real world, and networking—big or small—is always messy.
Still, whether the mega AI data center boom keeps roaring or fizzles, Nvidia is positioning itself to own the invisible infrastructure that underpins whatever future system emerges. Nvidia may be best known for selling the “picks and shovels” of AI—its GPUs—but its networking “plumbing” could help ensure the company wins either way.
Speaking of industry leaders, I hope you’ll check out Titans and Disrupters of Industry, a new podcast hosted by Fortune Editor-in-Chief Alyson Shontell that goes in depth with the powerful thought leaders shaping both the world of business and the very way we live. In this exclusive interview, Accenture’s Julie Sweet discusses the company’s strategic shifts and the impact of AI, tariffs, and geopolitical changes on businesses.
Also: In less than a month, I will be headed to Park City, Utah, to participate in our annual Brainstorm Tech conference at the Montage Deer Valley! Space is limited, so if you’re interested in joining, register here. I highly recommend: There’s a fantastic lineup of speakers, including Ashley Kramer, chief revenue officer of OpenAI; John Furner, president and CEO of Walmart U.S.; Tony Xu, founder and CEO of DoorDash; and many, many more!
With that, here’s the rest of the AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
This story was originally featured on Fortune.com