We’ve all had that uncanny moment online: realizing that the “someone” we’ve been talking to is actually a robot. Long before the release of ChatGPT mainstreamed the act of talking to “bots” on the internet, non-human accounts were all over the web. MIT computer scientist Joseph Weizenbaum created ELIZA in 1966 to simulate conversation with a real human being. Microsoft users met “Clippy” almost exactly three decades later. The anthropomorphic paper clip drew improbable vitriol from some users, but far more malicious bots became obvious in the years that followed on social media, especially on Twitter in the chaotic election season of 2016.
But the bots are still with us. Broadly defined as software applications that run automated, repetitive tasks, bots are still swimming in the digital ether, and they’re a key aspect of the artificial intelligence (AI) revolution that is threatening to undo the internet as it’s been known since the mid-1990s.
The catch is that the surge in bot activity is not just disrupting web traffic: it may also be inflating the internet economy by distorting the very metrics that drive tech company valuations. Automated bots now account for the majority of global internet traffic, surpassing human-generated activity for the first time in 2024, according to Imperva, a subsidiary of cybersecurity giant Thales. Imperva, which publishes an annual “Bad Bot Report,” found that more than half of internet traffic comes from non-human sources, with roughly 20% of it attributable to so-called “bad bots,” which engage in a host of malicious activities.
For example, bots generate fake pageviews, clicks, impressions, and user sessions, all of which inflate top-line web analytics data. This distortion directly impacts metrics including conversion rates and average session duration. Cybersecurity firms, which admittedly may be talking their book to some extent, claim that ad fraud bots also click on pay-per-click ads or simulate user activity, causing companies to pay for traffic and conversions that never represent real humans. They put the damage at hundreds of billions of dollars per year across the global internet economy.
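The distortion works in both directions. A hypothetical back-of-the-envelope calculation, with entirely made-up numbers, shows how bot sessions that never convert can drag a reported conversion rate far below the true one even as raw traffic looks healthy:

```python
# Hypothetical illustration of bot sessions distorting a conversion-rate
# metric. All figures below are invented for the example.

human_sessions = 10_000
human_conversions = 300      # real purchases by real people

bot_sessions = 10_000        # bots browse, click, and inflate pageviews
bot_conversions = 0          # ...but never actually buy anything

# The rate the business actually earns from humans
true_rate = human_conversions / human_sessions

# The rate a naive analytics dashboard would report
reported_rate = (human_conversions + bot_conversions) / (
    human_sessions + bot_sessions
)

print(f"true conversion rate:     {true_rate:.1%}")      # 3.0%
print(f"reported conversion rate: {reported_rate:.1%}")  # 1.5%
```

With half the traffic automated, the dashboard halves the conversion rate while doubling the “audience,” which is exactly the kind of headline-metric inflation investors struggle to see through.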
Also, consider the startups that showcase “vanity metrics” such as raw user sign-ups or app downloads, many of which can be (and often are) pumped up by bot traffic. These statistics are sometimes self-reported and rarely audited independently. Investors rely on these and other metrics to assess company value, so fake or inflated data can misrepresent underlying business strength.
Consider the investors that are pumping money into bot-boosted business models, and then consider the wisdom of Torsten Slok, the widely read chief economist for Apollo Global Management, who is known for shaking the financial community with the brief charticles in his “Daily Spark.” He recently posted an eye-popping chart, based on his calculations that “the difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s.” In other words, if the AI trade is a bubble, it’s a bigger bubble than the one that popped in the days of the “dotcom crash,” leading to a nasty recession. Slok didn’t address the bot question, but his warning lends further seriousness to the debate: what if the current AI boom is built on the backs of bots?
Bots and bubbles
This bot-driven inflation may be feeding into a broader tech and AI investment bubble. As companies report rapid user growth and engagement, investors chase the next big thing, and the result is a market environment reminiscent of the dot-com era, where hype and inflated metrics risk overshadowing real business fundamentals.
Consider the story of the unicorns: Silicon Valley’s term for private firms with $1 billion-plus valuations. From just a few dozen in 2013, when venture capitalist Aileen Lee coined the term to stress their rarity, unicorns have become anything but. They numbered over 1,200 by 2025, according to Founders Forum, an organization committed to connecting entrepreneurs. Surges in unicorn formation accompanied the “easy money” era between 2018 and 2021, when the Federal Reserve lowered interest rates to nearly unprecedented levels and venture capital money chased risky investments, seeking yield. The money in VC has since largely gravitated to AI, a deeply ironic turn of events.
History suggests that markets eventually correct when reality catches up to inflated expectations. Several factors point to a similar reckoning for AI and the bot problem. Recognition of fake metrics is one. As awareness grows about the scale of bot-driven inflation, investors and analysts could grow more skeptical of headline user numbers and engagement stats. New regulations are beginning to address the economic incentives behind bot-driven manipulation.
Regulating bots on the internet has become a critical focus for governments in response to their growing presence in commerce, social media, and consumer interactions. Bots can be used for both legitimate and malicious purposes: assisting with customer service, but also spreading misinformation, generating fake reviews, scalping tickets, or manipulating public opinion. In the U.S., that regulatory work falls mainly to the Federal Trade Commission (FTC).
What the government is trying to do about it
The FTC is the leading federal agency addressing deception and unfair practices involving bots, especially those affecting consumers and commerce. In 2024, the FTC issued a final rule prohibiting fake and AI-generated consumer reviews and testimonials, which applies to both traditional and AI-powered bots that generate misleading content or endorsements online.
Businesses can also face civil penalties for buying, selling, or disseminating fake reviews or endorsements, whether authored by bots or humans. The rule aims to ensure transparency in online marketplaces and curb deceptive practices.
From Congress, there’s the BOTS Act (Better Online Ticket Sales Act), enacted in 2016 and strengthened by executive order in 2025, which specifically targets the use of automated bots to circumvent controls on ticket purchases for concerts and events, often used by scalpers. The FTC enforces this law, which makes it illegal to use bots to bypass security or purchasing limits when acquiring event tickets. It could be thought of as the “Taylor Swift” law: fans found, to their displeasure, that tickets to her record-setting Eras Tour disappeared in seconds, gobbled up by bots.
The FTC also regularly issues business guidance calling for transparency and accuracy about AI chatbots and avatar services, warning against misleading consumers through these technologies. The agency advises companies to clearly disclose when users are interacting with bots, ensure bots do not misrepresent capabilities, and avoid using bots to manipulate or deceive consumers.
Some states, such as California, have passed laws requiring bots to identify themselves when attempting to influence a voter or consumer. Other states have introduced similar bills modeled after California’s “Bolstering Online Transparency Act,” though federal preemption and cross-border challenges remain.
What to watch for
As bot-driven metrics are exposed, companies with inflated user numbers may see their valuations fall, especially if they can’t demonstrate real, sustainable growth. The market may consolidate around companies with proven, human-driven engagement and revenue, while those reliant on artificial metrics struggle or fail. Expect increased demand for third-party verification of user and engagement data, as well as more robust bot-detection and filtering in analytics.
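What would that bot filtering look like in practice? A minimal sketch, assuming a simple log format, with thresholds and keywords invented for illustration, is to screen analytics entries by self-identifying user-agent strings and by request rates no human browser could sustain. Real detection systems are far more sophisticated, since “bad bots” spoof both signals:

```python
# A minimal, illustrative sketch of heuristic bot filtering in web
# analytics. The log format, keyword list, and rate threshold are
# assumptions for the example, not a production detection method.
from collections import Counter

BOT_UA_KEYWORDS = ("bot", "crawler", "spider", "headless")

def looks_like_bot(entry, requests_per_ip, rate_threshold=100):
    """Flag an entry whose user-agent self-identifies as automated,
    or whose IP issues implausibly many requests."""
    ua = entry["user_agent"].lower()
    if any(keyword in ua for keyword in BOT_UA_KEYWORDS):
        return True
    return requests_per_ip[entry["ip"]] > rate_threshold

def filter_human_traffic(log):
    """Return only the entries that pass both heuristics."""
    requests_per_ip = Counter(entry["ip"] for entry in log)
    return [e for e in log if not looks_like_bot(e, requests_per_ip)]

log = [
    {"ip": "1.2.3.4", "user_agent": "Mozilla/5.0"},
    {"ip": "5.6.7.8", "user_agent": "ExampleBot/1.0 (crawler)"},
]
print(len(filter_human_traffic(log)))  # keeps only the human-looking entry
```

Third-party verification would amount to running audits like this, at much greater rigor, over the raw logs behind a company’s reported user numbers.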
Then again, bots have been a feature of computing for over half a century, and they’ve only grown more plentiful over time. Bot-driven inflation of internet statistics may simply become an inevitable part of digital life.
For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.
This story was originally featured on Fortune.com