Over the past year, the conversation around artificial intelligence has largely focused on productivity, automation, and economic disruption. From research papers by companies like Anthropic to think pieces in major outlets like The New York Times, the dominant narrative has been clear: AI will reshape jobs, boost efficiency, and redefine knowledge work. But this framing misses a more immediate and visible transformation already underway — AI is actively degrading the internet itself.
Much like earlier studies that tried to map AI capabilities to job categories, recent research attempts to quantify how AI will integrate into existing systems. Anthropic’s report introduces the idea of “observed exposure,” combining theoretical AI capability with real-world usage. On paper, this sounds rigorous. In practice, it largely focuses on high-value use cases such as drafting business communications, coding applications, or assisting with academic work — the kinds of examples that appear in marketing decks and investor presentations.
What these studies consistently overlook is how AI is actually being used at scale across the internet. A new report by journalist Jason Koebler at 404 Media highlights two dominant use cases: mass-produced “AI slop” content and synthetic adult material. These outputs are not edge cases — they are flooding platforms, gaming search algorithms, and overwhelming human-created content. According to the report, large portions of social feeds, product listings, and even informational websites are now dominated by low-quality, AI-generated material designed purely to capture clicks.
This explosion of synthetic content is having measurable consequences. Search engines like Google are increasingly surfacing AI-generated pages that are optimized for keywords but lack substance. At the same time, AI chatbots are disintermediating traffic entirely. Tools like ChatGPT and similar assistants answer user queries directly, reducing the need to visit original sources. This shift is already impacting publishers, many of whom relied on search traffic and advertising revenue to sustain operations.
The result is a feedback loop that is difficult to reverse. As AI-generated content floods the web, it becomes part of the training data for future models. Those models then generate even more content, often blending fact, fiction, and noise. Researchers have begun to warn about “model collapse,” where AI systems degrade over time as they train on their own synthetic outputs rather than human-created data. A study posted to arXiv shows how this cycle can reduce the quality and diversity of AI outputs over generations.
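The model-collapse dynamic can be made concrete with a toy simulation (a deliberately simplified sketch, not how real language models are trained): fit a simple Gaussian "model" to data, sample new synthetic data from it, refit to those samples, and repeat. The Gaussian, the sample size, and the generation count here are all illustrative choices, not anything from the studies themselves.

```python
import random
import statistics

# Toy illustration of "model collapse": a Gaussian "model" is
# repeatedly refit to samples drawn from its own previous version.
# Small-sample noise and estimator bias compound across generations,
# so the distribution's diversity (its standard deviation) decays.
random.seed(0)  # fixed seed so the run is reproducible

mu, sigma = 0.0, 1.0   # generation 0: the original "human-created" distribution
n = 200                # synthetic samples drawn per generation

for generation in range(2000):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)      # refit the mean to the synthetic data
    sigma = statistics.pstdev(samples)  # refit the spread (MLE, slightly biased low)

print(f"std dev after 2000 generations: {sigma:.6f}")  # far below the original 1.0
```

Real models are vastly more complex, but the same mechanism, each generation learning from a lossy, finite sample of the last, is what drives the narrowing the researchers describe.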
“Anthropic’s research continues a time-honored tradition by AI companies who want to highlight the ‘good’ uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for,” argues Koebler. “Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth…”
Meanwhile, the economic impact extends beyond theory. Entire categories of online work — from freelance writing to niche publishing — are being undercut by AI systems that can produce effectively unlimited volumes of content at near-zero cost. Traffic declines are already being reported across independent websites, as users either receive answers directly from AI or encounter search results saturated with low-value pages. A recent analysis from Pew Research Center points to shifting user behavior, with more people relying on AI summaries instead of browsing traditional websites.
Critics argue that this reality is being downplayed. As Koebler has noted, AI companies have strong incentives to highlight productive, aspirational use cases while ignoring the more chaotic and harmful ones. The narrative of AI as a productivity tool is far easier to sell than the reality of an internet increasingly dominated by spam, automation, and synthetic media.
“This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media,” writes Koebler, in closing. “We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What’s happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice.”
What is emerging is not just a technological shift but a structural transformation of the web itself. The internet was built as a many-to-many publishing system, where individuals and organizations could create content, build audiences, and generate economic value. AI is compressing that system into something far more centralized and automated — where a small number of models generate content, and users interact with those models instead of the open web.
The long-term implications remain uncertain. But one thing is already clear: the impact of AI on the internet is not a future risk to be modeled — it is a present reality unfolding in real time. And until research, policy, and industry discussions begin to account for how AI is actually being used — not just how it could be used — we may be underestimating just how quickly the foundations of the web are being rewritten.