Hello and welcome to Eye on AI. In this edition…the U.S. Census Bureau finds AI adoption declining…Anthropic reaches a landmark copyright settlement, but the judge isn’t happy…OpenAI is burning piles of cash, building its own chips, producing a Hollywood movie, and scrambling to save its corporate restructuring plans…OpenAI researchers find ways to tame hallucinations…and why teachers are failing the AI test.
Concerns that we are in an AI bubble—at least as far as the valuations of AI companies, especially public companies, are concerned—are now at a fever pitch. Exactly what might cause the bubble to pop is unclear. But one of the things that could cause it to deflate—perhaps explosively—would be some clear evidence that big corporations, which hyperscalers such as Microsoft, Google, and AWS are counting on to spend huge sums to deploy AI at scale, are pulling back on AI investment.
So far, we haven’t seen that evidence in the hyperscalers’ financials, or in their forward guidance. But there are certainly mounting data points that have investors worried. That’s why the MIT survey finding that 95% of AI pilot projects fail to deliver a return on investment got so much attention. (Even though, as I have written here, the markets chose to focus only on the somewhat misleading headline and not look too carefully at what the research actually said. Then again, as I’ve argued, the market’s inclination to view negatively news that it might have shrugged off, or even interpreted positively, just a few months ago is perhaps one of the surest signs that we may be close to the bubble popping.)
This week brought another worrying data point that probably deserves more attention. The U.S. Census Bureau conducts a biweekly survey of 1.2 million businesses. One of the questions it asks is whether, in the last two weeks, the company has used AI, machine learning, natural language processing, virtual agents, or voice recognition to produce goods or services. Since November 2023—which is as far back as the current data set seems to go—the share of firms answering “yes” has been trending steadily upwards, especially if you look at the six-week rolling average, which smooths out some spikes. But for the first time, in the past two months, the six-week rolling average for larger companies (those with more than 250 employees) has shown a very distinct dip, dropping from a high of 13.5% to closer to 12%. A similar dip is evident for smaller companies too. Only microbusinesses, with fewer than four employees, continue to show a steady upward adoption trend.
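For readers curious about what that smoothing actually does, here is a minimal sketch of a six-week rolling average over a biweekly series. The numbers are made up for illustration; they are not the actual Census Bureau figures.

```python
# Illustrative only: invented adoption shares, not the real BTOS survey data.
import pandas as pd

# Biweekly readings (percent of large firms reporting AI use)
raw = pd.Series(
    [12.8, 13.1, 13.5, 13.2, 12.6, 12.1, 12.0],
    index=pd.date_range("2025-05-04", periods=7, freq="2W"),
)

# A six-week rolling window covers three biweekly observations;
# averaging them damps single-survey spikes while preserving the trend.
smoothed = raw.rolling(window=3).mean()
print(smoothed.round(2))
```

The first two smoothed values are blank (the window isn’t full yet), which is why a dip in the rolling average only becomes visible a survey or two after the underlying readings turn down.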
A blip or a bursting?
This might be a blip. The Census Bureau also asks another question about AI adoption, querying businesses on whether they anticipate using AI to produce goods or services in the next six months. And here, the data don’t show a dip—although the percentage answering “yes” seems to have plateaued at a level below what it was back in late 2023 and early 2024.
Torsten Sløk, the chief economist at the investment firm Apollo, who flagged the Census Bureau data on his company’s blog, suggests the results are probably a bad sign for companies whose lofty valuations depend on ubiquitous and deep AI adoption across the entire economy.
Another piece of analysis worth looking at: Harrison Kupperman, the founder and chief investment officer at Praetorian Capital, after making what he called a “back-of-the-envelope” calculation, concluded that the hyperscalers and leading AI companies like OpenAI are planning so much investment in AI data centers this year alone that they will need to earn $40 billion per year in additional revenue over the next decade just to cover the depreciation costs. The bad news is that total current annual revenue attributable to AI is, he estimates, just $15 billion to $20 billion. I think Kupperman may be a bit low on that revenue estimate, but even if revenues were double what he suggests (which they aren’t), that would only be enough to cover the depreciation cost. That certainly seems pretty bubbly.
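To make the shape of that arithmetic concrete, here is a rough sketch of the back-of-the-envelope math. The capex figure and straight-line depreciation schedule below are my assumptions for illustration, not Kupperman’s published inputs.

```python
# Assumed inputs, chosen to be consistent with the $40bn/year figure cited
# above; they are placeholders, not Kupperman's actual numbers.
capex_this_year_bn = 400      # assumed AI data center spend this year, $bn
useful_life_years = 10        # assumed straight-line depreciation period

annual_depreciation_bn = capex_this_year_bn / useful_life_years   # 40.0

# Midpoint of the estimated $15bn-$20bn in current annual AI revenue
current_ai_revenue_bn = (15 + 20) / 2

shortfall_bn = annual_depreciation_bn - current_ai_revenue_bn
print(f"Annual depreciation to cover: ${annual_depreciation_bn:.0f}bn")
print(f"Shortfall vs. current AI revenue: ${shortfall_bn:.1f}bn per year")
```

Under those assumptions, the shortfall is north of $20 billion a year before a single dollar of profit—which is the point: even doubling today’s AI revenue only gets you to breakeven on depreciation alone.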
So, we may indeed be at the top of the Gartner hype cycle, poised to plummet into the “trough of disillusionment.” Whether we see a gradual deflation of the AI bubble, or a detonation that results in an “AI winter” (a period of sustained disenchantment with AI and a funding desert) remains to be seen. In a recent piece for Fortune, I looked at past AI winters—there have been at least three since the field began in the 1950s—and tried to draw some lessons about what precipitates them.
Is an AI winter coming?
As I argue in the piece, many of the factors that contributed to previous AI winters are present today. The past hype cycle that seems most similar to the current one took place in the 1980s around “expert systems”—though those were built using a very different kind of AI technology from today’s models. What’s most strikingly similar is that Fortune 500 companies were excited about expert systems and spent big money to adopt them, and some found huge productivity gains from using them. But ultimately many grew frustrated with how expensive and difficult it was to build and maintain this kind of AI, as well as how readily it could fail in real-world situations that humans handled easily.
The situation is not that different today. Integrating LLMs into enterprise workflows is difficult and potentially expensive. AI models don’t come with instruction manuals, and fitting them into existing corporate processes, or building entirely new ones around them, requires a ton of work. Some companies are figuring it out and seeing real value. But many are struggling.
And just like the expert systems of the 1980s, today’s AI models are often unreliable in real-world situations—although for different reasons. Expert systems tended to fail because they were too inflexible to deal with the messiness of the world. In many ways, today’s LLMs are far too flexible—inventing information or taking unexpected shortcuts. (OpenAI researchers just published a paper on how they think some of these problems can be solved—see the Eye on AI Research section below.)
Some are starting to suggest that the solution may lie in neurosymbolic systems: hybrids that try to combine the best features of neural networks, like LLMs, with those of rule-based, symbolic AI similar to the 1980s expert systems. It’s just one of several alternative approaches to AI that may start to gain traction if the hype around LLMs dissipates. In the long run, that might be a good thing. But in the near term, it might be a cold, cold winter for investors, founders, and researchers.
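As a cartoon of the neurosymbolic idea—not any production system—here is a toy sketch in which a stubbed-out “neural” component proposes an answer and a symbolic rule layer recomputes the arithmetic to accept or veto it. Every function here is hypothetical; a real system would put an actual LLM behind the first step.

```python
# Toy neurosymbolic pipeline: neural component proposes, symbolic layer checks.
# All names are illustrative; the stub stands in for a real LLM call.

def neural_propose(question: str) -> str:
    """Stand-in for an LLM: fluent, but not guaranteed to be correct."""
    canned = {"What is 17 * 23?": "391"}   # a real LLM might return "381"
    return canned.get(question, "unknown")

def symbolic_check(question: str, answer: str) -> bool:
    """Hard rule: for multiplication questions, recompute and compare.

    The parsing here is deliberately naive and only handles the toy format.
    """
    if "*" in question:
        expr = question.split(" is ")[1].strip(" ?")
        a, b = (int(x) for x in expr.split("*"))
        return answer == str(a * b)
    return True  # no rule applies; accept the neural output by default

question = "What is 17 * 23?"
proposal = neural_propose(question)
verdict = "accepted" if symbolic_check(question, proposal) else "rejected"
print(f"{proposal} -> {verdict}")
```

The design point is the division of labor: the flexible component generates candidates, and the rigid, rule-based component enforces constraints the neural side can’t be trusted to respect—roughly the inverse of why 1980s expert systems failed.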
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Correction: Last week’s Tuesday edition of the newsletter misreported the year Corti was founded. It was 2016, not 2013. It also mischaracterized the relationship between Corti and Wolters Kluwer. The two companies are partners.
Before we get to the news, please check out Sharon Goldman’s fantastic feature on Anthropic’s “Frontier Red Team,” the elite group charged with pushing the AI company’s models into the danger zone—and warning the world about the risks it finds. Sharon details how this squad helps Anthropic’s business, too, burnishing its reputation as the AI lab that cares the most about AI safety and perhaps winning it a more receptive ear in the corridors of power.