A while back, I made some predictions about the course of the current AI craze, and wondered why people would, in essence, hire a worker whose response to not knowing an answer was to bullshit floridly.
In that article, I postulated a few reasons for people to put their trust in these bullshit machines. One was that we are experiencing a “novel epidemic,” with no immune system response built to handle the specific kind of bullshit being lobbed at us by AI vendors. I predicted at the time that—much like 1950s TV commercials seemed compelling to viewers at the time but corny and stupid to the generations that came after—we would soon start to develop an immune response.
The first signs of that response have now become apparent. To understand the form it takes, let’s take a look at another property famed for its infinite generative capacity: Minecraft.
Minecraft Syndrome, or, “Haven’t I Been Here Before?”
One of the coolest parts about starting to play Minecraft is the thrill of truly infinite exploration. It doesn’t matter what direction you start walking in, or what coordinates you teleport yourself to—it will go on forever, in every direction, and you will never run out of new territory to see for the first time.
But there’s a reason that hardcore Minecraft players, the ones who play for tens of thousands of hours, almost never view exploration as the core game loop. If you like crafting, or building, or programming giant factories, you may get hooked on Minecraft for years. If, on the other hand, you’re drawn in by the promise of an infinitely explorable world, you’re almost certain to be disappointed.
Why? Because of something I’ll term procedural fatigue.
In Minecraft, the first time you encounter a wild new landscape type, it can be positively exhilarating. Here is a crack in the earth where lava is exposed, running out like waterfalls! Here’s a rainforest brimming with wildlife and plants! Here’s a village full of people! Wow, a secret dungeon that goes on for ages!
What’s more, when you next encounter a village in another location, it won’t be the same village. It’ll be different, often quite different in terms of size, population, and layout. The next crack in the earth you see with the lava waterfalls will be different from the last one, too. And the next dungeon.
This is all very fun, until it isn’t.
Somewhere between 10 and 1,000 hours into Minecraft (depending on your individual tolerance), you’re likely to find that something fundamental has changed about how you see the “infinite” world. In spite of each new zone looking different from the last, they all start to seem…sort of the same.
See one dungeon, and you’ve had a unique experience. See three and you’ll marvel at the small changes that make them feel different. See a hundred, and no matter what changes, it’s still just another dungeon. You might still play, but you’re not going to have that feeling of awe and wonder again.
That’s procedural generation for you. From a random seed, these generated worlds can go on infinitely—but they’re created with procedural rules that (even if we can’t see the rules ourselves) become apparent as we familiarize ourselves with the landscape. Over time, we spot the patterns, and every new village looks like the last village we encountered.
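To see the mechanism concretely, here’s a toy sketch in Python. The rules here are hypothetical, nothing like Minecraft’s actual generator; the point is just that every “village” comes from the same handful of rules, and only the seed changes.

```python
import random

# Toy world generation: hypothetical rules, not Minecraft's actual algorithm.
# Every village is drawn from the same small rule set; only the seed differs.

BUILDING_TYPES = ["hut", "farm", "smithy", "church", "well"]

def generate_village(seed: int) -> dict:
    rng = random.Random(seed)              # deterministic for a given seed
    size = rng.randint(3, 9)               # rule: a village has 3-9 buildings
    buildings = [rng.choice(BUILDING_TYPES) for _ in range(size)]
    population = size * rng.randint(2, 5)  # rule: 2-5 villagers per building
    return {"buildings": buildings, "population": population}

print(generate_village(42))    # one "unique" village...
print(generate_village(1337))  # ...and another, from the very same rules
```

Two seeds, two superficially different villages. A hundred seeds, and you haven’t seen a hundred villages; you’ve seen the same rule set a hundred times.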
The most incredible part of this procedural fatigue is that it happens even for individuals who have no idea how Minecraft actually works, who have never written a line of code in their lives. You don’t have to know anything about the underlying mechanisms that create the world of Minecraft in order to experience the procedural fatigue, the collapse of all the “individual” scenes into a few categories. It’s baked into your cognition.
Humans do this because we are finely tuned pattern recognizers. Our cognitive capabilities come with significant limitations, and it’s necessary to our survival as a species that we remain capable of cognitively “collapsing” a large number of superficially different objects or places into a single category.
We don’t want to have to re-evaluate every new stray cat or songbird we see on our evening walk as a totally new entity worthy of its own entry in our memory. Instead, we log it as “stray cat, black” or “cardinal, male.” As we drive through suburbs, we rarely think about the distinctiveness of each beige subdivision. They’re all just subdivisions. Unless we have some new and unusual interaction that renders one of these entities unique in our cognition—say, it’s the particular subdivision where your best friend lives, or the stray cat comes up to be petted three walks in a row—your brain simply files it away in a category rather than considering it individually.
When that happens, the relative value of the categorized item goes down. Something sui generis, or at least very rare, will capture your attention and money and time. When novel output becomes fungible, a commodity, it loses value. Even the novelty itself wears thin after a time.
How Sizzle Turns to Slop: LLMs as Procedural Generation
LLMs, like Minecraft, produce what is essentially procedurally generated output. The procedures are more complicated and consume far more compute, but they are still all based on the same underlying architecture.
The user’s input to an LLM constitutes the “seed,” which then generates outputs based on extremely complex, heuristic procedural rules. The outputs can appear vast in variation, especially at first—but over time, the shape of the system reveals itself.
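Here’s a deliberately crude sketch of that procedure. The lookup table below is a made-up stand-in for a neural network conditioned on tens of thousands of tokens of context, and all the probabilities are invented; what’s real is the loop: sample a token, append it, repeat.

```python
import random

# Autoregressive generation, radically simplified. A real LLM replaces this
# hypothetical lookup table with a neural network over the whole context,
# but the generation loop is the same shape.

NEXT_TOKEN_PROBS = {
    "<start>":    [("Security", 0.6), ("Leadership", 0.4)],
    "Security":   [("isn't", 0.8), ("is", 0.2)],
    "Leadership": [("isn't", 0.8), ("is", 0.2)],
    "isn't":      [("a tooling issue.", 0.7), ("a tech problem.", 0.3)],
    "is":         [("an incentive issue.", 1.0)],
}

def generate(prompt_seed: int, max_tokens: int = 3) -> str:
    rng = random.Random(prompt_seed)  # the prompt plays the role of the seed
    token, output = "<start>", []
    for _ in range(max_tokens):
        choices = NEXT_TOKEN_PROBS.get(token)
        if not choices:
            break
        tokens, weights = zip(*choices)
        token = rng.choices(tokens, weights=weights)[0]
        output.append(token)
    return " ".join(output)

print(generate(1))  # e.g. "Security isn't a tooling issue."
print(generate(2))  # e.g. "Leadership is an incentive issue."
```

Vary the seed and the surface varies. The procedure never does.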
The reason LLM output seems so miraculous and full of potential at first is the same reason that first-time Minecraft players of all ages often end up staying up all night exploring. When it all looks brand new, procedurally generated content is fascinating, captivating, almost hypnotic.
Who wasn’t amazed the first time they saw an LLM give a pretty good response to an initial query? Nearly everyone who starts using ChatGPT or Claude or any other LLM chatbot will try it out on some task where its output seems as good as or better than what a human could produce in the same time—say, responding to a boss’s idiotic email with professionalism and tact.
The boss sees the email, it goes over well, and you think: this could really be something. Hell, maybe it’ll take your job, if it’s that good. Better start networking. Better start using LinkedIn more. Hey, maybe the LLM could write you a post that would get engagement. You ask it for one, and it spits this out:
The most dangerous vulnerability in cybersecurity isn’t a zero-day. It’s misaligned incentives.
You can patch a CVE.
You can’t patch a business that rewards people for hitting quarterly goals at the expense of long-term risk.
You can’t patch a CISO whose bonus depends on "no incidents reported"—even if that means brushing real ones under the rug.
You can’t patch a sales team that pushes out insecure product builds to meet revenue targets.
You can’t patch a board that sees security as a compliance checkbox but not a strategic function.
We spend billions every year trying to fix security problems downstream of bad decisions. But security isn’t a tooling issue. It’s not even a technical issue.
It’s an incentive issue.
Until you align what people are rewarded for with what keeps the company safe, no tech stack will save you.
This is why so many breaches aren’t failures of technology—they’re failures of leadership.
I’m curious—what’s the biggest misaligned incentive you’ve seen that made security worse, not better?
👇 Let’s get specific. The real stories are what help teams do better.
#cybersecurity #leadership #incentives #infosec #risk #securityculture #businessstrategy
Look familiar?
At this point, probably 90% (and that may be a conservative estimate) of LinkedIn “influencers” are using not just LLMs in general, but specifically ChatGPT’s 4o model to develop their posts.
The first posts that looked like this got a lot of engagement. After seeing a few dozen, though, the patterns become very apparent:
Lots of fast “it’s not this, it’s that” contrast language that sounds important but is really a strawman when you break it down (does anyone really think security is just a tooling issue?)
Super-short paragraphs designed to foster engagement on platforms that encourage low attention spans (like LinkedIn)
Parallelisms that exist to drive home rhetorical points, but are fundamentally empty and could be expressed in far fewer words
And, of course, the stupid emoji and line of hashtags.
How many of these posts can you read before they just become wallpaper? Some of the little techniques and tics GPT uses were once sprinkled by copywriters into their copy to give it a little sizzle. Once these techniques are overused to the point of occurring several times per paragraph and showing up in every post in your scrolldown, their effectiveness craters. These rhetorical patterns turn from sizzle to slop.
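To illustrate just how mechanical these tics are, here’s a naive, hypothetical heuristic (not a real detector, and certainly not proof of AI authorship): a few regexes are enough to flag most of the patterns above.

```python
import re

# A naive slop heuristic: count a few of the tics listed above. None of
# these prove AI authorship on their own; the point is that the patterns
# are mechanical enough for simple regexes to find.

SLOP_TICS = {
    "not-this-but-that contrast": r"isn't (?:just )?a\b[^.]*\. It's",
    "hammered parallelism": r"You can't patch",
    "em dash": "\u2014",
    "hashtag run": r"(?:#\w+\s*){3,}",
    "pointing emoji": "\U0001F447",
}

def slop_report(text: str) -> dict:
    return {name: len(re.findall(pat, text)) for name, pat in SLOP_TICS.items()}

sample = (
    "The most dangerous vulnerability isn't a zero-day. It's misaligned "
    "incentives.\nYou can't patch a business. You can't patch a CISO.\n"
    "\U0001F447 Let's get specific.\n#cybersecurity #leadership #infosec"
)
print(slop_report(sample))
# {'not-this-but-that contrast': 1, 'hammered parallelism': 2,
#  'em dash': 0, 'hashtag run': 1, 'pointing emoji': 1}
```

That a handful of regexes gets this far is the whole point: your brain runs the same cheap pattern match, and once it compiles, the sizzle is gone.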
Over on Reddit, subreddits ranging from /r/ChatGPT to /r/LinkedIn to the fan subreddit for Ed Zitron’s outstanding podcast, /r/BetterOffline, are full of people making fun of these stylistic tics from AI-generated posts.
Look hard enough and you’ll even find AI slop about AI slop.
Often, you see the little germ of a point somewhere in there. “Did LinkedIn ever have any integrity to lose?” could be a worthwhile question, but any individually interesting tree is lost in the procedurally generated forest.
After a while, the “dead internet” people start making sense: you start to see AI slop responses to AI slop content all over LinkedIn, and “influencers” who exist purely as AI slop posts and AI slop answers to other AI slop influencers.
You Can’t Un-Minecraft Minecraft
Once it all becomes wallpaper, the obvious next step would be to tell it to stop doing that.
Stop the sentence fragments, stop the relentless use of em dashes. Stop sounding like ChatGPT, for God’s sake.
That’s when you find out that all the talk about how LLMs can “learn from their mistakes” and take human direction is just talk:
Even if you can make it produce a few lines without the offending pattern, the relief won’t last long: next time you prompt, it’ll just go back to the same old patterns.
Minecraft still looks like Minecraft, even if you mod it. Procedural generation still looks procedural.
If GPT changed the way it “weights” inputs, it would start sounding new and fresh again…for a bit, until you once again see the procedural patterns.
The Great Flattening
If the Internet isn’t really “dead” yet, it’s certainly reaching the age where it’s not what it once was.
Once upon a time, for the first several decades of the world wide web, different websites felt and looked very different from one another. You’d read a news article, and it would feel different from a forum post, which would feel different from an SEO-bait web page designed to convert search clicks into revenue.
As Google searches convert into ChatGPT prompts, while businesses in every industry replace professional copywriters and editors with low-effort GPT garbage output, it all starts to look the same. Zero-effort content can proliferate a thousand times faster than anything decent that adds real value or novel thought, destroying any hope of using the internet in the way we once did (I knew this was inevitable a year ago, and we’ve just started to see the results in action).
As the great flattening continues, all that value the internet brought—the value of the real, highly varied, wild internet from before we’d ever heard of GPT—doesn’t disappear. People still want to get real information about real topics. People still want to find recommendations for products, and then buy them.
As procedural generation fatigue sets in, the perceived value of additional procedurally generated content plummets (something I also covered in my post on the inevitable death of search engines).
If the AI producers are lucky, this will continue until it asymptotically approaches zero, leaving the biggest vendors squabbling over the water rights to a fast-drying puddle.
If they are unlucky (and I believe they may well be), people will begin to regard this content as actually delivering negative value, to the point where both individuals and businesses will see value in excluding LLM-generated content. In this situation, a new market will evolve to actively push back against generative content, with VC money flowing into companies designed to protect users from the tsunami of slop that threatens to destroy their ability to search for true information, engage socially with other humans, or grow a business or social endeavor.
What You’ll See Next
Before the bubble bursts and the money pivots to helping people sort out the high-value, high-effort content from the slop, you’ll see a few warning signs.
We’re not there yet: today, a lot of absolute dreck produced by GPT with little-to-no human tuning still goes viral on Facebook and LinkedIn and X and BlueSky.
Remember, procedural fatigue only sets in after a certain degree of exposure. Early adopters, like me (and probably you, if you read this blog), notice first. It’s why comments sections on obviously AI-generated ragebait slop now usually include one or two “this is just GPT, guys” comments alongside comments from non-early-adopters who can’t see the procedurally generated forest for the trees.
Sooner or later, though (and it’ll be sooner than you think—maybe 6-12 months), even AI-naive Boomers will be able to recognize the stink of AI slop. You’ll know fatigue has set in among larger user segments, not just early adopters, when you see some of these signs:
GPT will re-tune its model to kill its current tics: it will stop em-dashing and “not just this, but that”ing all over the place, and instead develop new identifiable tics that take another 6-12 months for people to identify as GPT slop.
LLMs will begin to offer tuned models that claim to speak with a specific authorial voice (and initially, to a person naive to that specific model, will appear novel and unique).
AI detection arms races will heat up as people start to seek high-effort, high-value content in a sea of slop.
More and more “normie” spaces online will start to ban obvious slop, and battles will ensue when people claim not to be using AI—some of them, particularly the younger generation of users, will even be telling the truth, and will have ended up in the regrettable position of learning how to write from AI slop to the point where their actual high-effort writing sounds just like it.
Search will become visibly unusable. Users will stop finding what they need via Google (or even via ChatGPT) because the top-ranked or generated content is all SEO-churned sludge or indirect summaries of other AI-generated sludge. Bounce rates will increase. Reddit and Discord, previously used as search modifiers, will become useless as AI slop generation overwhelms human content production.
These are all signs of the value of AI-generated content asymptotically approaching zero, and the heat will be on.
But as long as the AI companies can stay afloat for another year or two, in spite of their ridiculous burn rates (many, like Ed Zitron, are rightly very skeptical that this will be the case), they will scramble and compete even harder as value declines. They will turn desperate, making increasingly wild claims to keep the dream alive.
The real collapse of the AI economy will happen not when the value approaches zero, but when it turns negative.
What will that look like? I’ve got a few ideas:
AI-generated slop gets banned in contracts, with penalties for violators. No, it won’t be possible to catch everyone using GPT, but the legal risk will restrain a lot of larger companies.
Executives push back against using AI slop as LinkedIn comments sections fill up with mockery and derision regarding their obvious no-effort generative AI use.
The first lawsuits hit. Individuals or small businesses sue AI companies or their customers for reputational or financial harm caused by hallucinated content passed off as authoritative. Courts begin to question the presumed harmlessness of “helpful” LLMs.
User trust starts to default to private groups. Public platforms like LinkedIn, Medium, and Quora hemorrhage credibility as users seek out closed communities with content moderation, voice verification, or human curation. Discords, newsletters, and Slack groups surge (with constant fighting over whether specific individuals are using AI).
Eventually—as I said in my post about the coming death of search engines—curation will rule the day. With luck, by the time this happens, the spending of AI companies will become so untenable that the entire house of cards collapses, leaving a few specific, tailored, smart LLM use cases alive while humanity collectively spits on the corpses of thousands of never-profitable hopefuls.
Hastening the Demise of the LLM Slop Era: What You Can Do
Even if the flow of slop stopped completely, this minute, it would take years to undo the sloppification of the web. But the sooner it stops, the sooner we can begin to actually use the internet again.
So what can any one individual do?
The single most important thing you can do to push back against the avalanche is this:
When you see something, say something.
AI slop on your LinkedIn feed? In your favorite subreddit? In a Facebook group? Call it out. In fact, don’t just call it out: mention the slop warning signs. Enrich the slop detection capabilities of your friends, family, and colleagues to whatever degree you can.
In the same vein, when you get an AI slop private message, reply to the sender with mockery or derision.
It’s not pointless: I recently saw an executive at a large company stop all automated, AI-based LinkedIn message sends because one person responded to the automated send with “Gross.” That’s all it took. Now that exec writes messages directly again, instead of relying on AI tools. Until they see pushback, they won’t change. Why would they?
Early adopters turning into burned-out early skeptics won’t move the needle. Median users pushing back will.
It’s true that you won’t usually change the mind of a dedicated slop poster or “influencer,” but that’s not the point. You’re giving the non-early-adopters, the normies, the cognitive tools to recognize procedurally generated content better in the future.
In a world of slop peddlers, be a fatigue accelerationist. Fatigue will happen no matter what, but the sooner it happens, the less of our economy and energy will continue to flow directly into the sewer of LLM slop.