Slop
The 2025 word of the year that described an internet drowning in AI-generated mediocrity
“Authenticity is becoming infinitely reproducible.”
— Adam Mosseri, Head of Instagram
The word found us before we found the word. For months, people had been struggling to articulate what was happening to their feeds—the uncanny proliferation of images that looked almost right but felt deeply wrong. Soldiers with seven fingers saluting before flags with too many stars. Jesus Christ rendered in impossible lighting, his hands clasped in prayer with anatomically incoherent fingers, his eyes carrying the vacant benevolence of a model trained on a billion stock photos but zero actual human experience.
Then someone called it slop, and suddenly everyone knew exactly what they meant.
Both Merriam-Webster and the American Dialect Society named “slop” their word of the year for 2025. The definition, still coalescing: low-quality AI-generated content produced at scale, distributed without disclosure, consumed without understanding. Slop is not a failure of AI. It is AI working exactly as intended by the people deploying it—a machine for converting compute into engagement, meaning be damned.
A Stanford-Georgetown study released in late 2025 documented entire networks of Facebook pages pumping out unlabeled AI images of Jesus, American soldiers, American flags, and sentimental scenes involving puppies and grandmothers. The pages had millions of followers. The images generated millions of interactions. The operators—often based overseas—monetized the engagement through Facebook’s ad revenue sharing program. They were, in the most literal sense, manufacturing synthetic emotions for profit.
Rolling Stone’s investigation into Facebook’s slop crisis landed like a depth charge. The publication documented how Facebook’s older, less tech-literate user base was particularly vulnerable—people who grew up trusting that a photograph was, by definition, a record of something real. For them, an AI-generated image of a soldier saluting at a grave is not content to be evaluated. It is a moment to be felt. The engagement it generates is sincere. The image that generated the engagement is not.
There is something uniquely corrosive about this transaction. Misinformation implies intent to deceive about facts. Slop operates differently. It doesn’t lie about events. It manufactures sentiment. It creates emotional responses to things that never happened, people who never existed, moments that were never shared. It is, to borrow Harry Frankfurt’s term from philosophy, a form of bullshit—communication produced with no concern for truth or falsity, only for effect.
Instagram’s response—the “AI info” label—has been revealing in ways the platform probably didn’t intend. Studies show the label slashes engagement by fifteen to eighty percent. This creates a devastating incentive: if labeling your content as AI-generated causes it to perform dramatically worse, creators are financially motivated to hide AI usage. The transparency mechanism designed to protect users instead penalizes honesty.
Instagram head Adam Mosseri acknowledged the paradox in January 2026, saying that “authenticity is becoming infinitely reproducible.” It’s a striking phrase. If authenticity can be reproduced, it is by definition no longer authentic. What Mosseri described, whether he intended to or not, is the collapse of a signal. When you can no longer distinguish real from synthetic, the category of “real” stops carrying information.
YouTube took a harder line. In July 2025, the platform announced it would no longer pay creators for “mass-produced, repetitive, or AI-generated” content lacking originality. The message was clear: if your creative contribution is writing a prompt and clicking generate, you are not a creator. You are an operator. YouTube will not subsidize the operation.
Meta’s response has been more conflicted. The company’s independent Oversight Board instructed it in late June 2025 to “identify and label manipulated audio and video at scale”—a directive that reads less like guidance and more like an admission that the company had failed to do so. Meta has since announced plans to overhaul its platforms to separate AI and human content, but the timeline remains vague and the implementation details vaguer.
The numbers tell the story of a losing battle. By late 2025, over three billion images per month were being generated using diffusion-model platforms. AI-generated content overtook human-made content in volume by November 2024, reaching fifty-two percent by May 2025. Up to thirty-two percent of all images shared on major social platforms in early 2026 show evidence of partial or full AI augmentation.
We are, to state it plainly, drowning. Not in misinformation—that implies too much intentionality. We are drowning in noise. In content that exists not because someone wanted to say something, but because a system was optimized to produce something that triggers engagement. The ocean of AI slop is not evil. It is indifferent. And indifference at scale is its own form of destruction.
For a detection company, slop presents a different kind of challenge than deepfakes. Deepfakes are adversarial—someone is trying to deceive. Slop is ambient—nobody is trying to do anything except generate clicks. Detecting a deepfake is forensics. Detecting slop is epidemiology. The question is not “was this image created by AI” but “is this image part of a pattern of mass-produced synthetic content designed to exploit human emotional responses for monetization?”
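The epidemiological framing can be made concrete with a toy sketch: instead of classifying one image, score an account by the pattern of its output—volume, templated captions, and the share of posts an upstream detector flags as AI. Everything here is hypothetical illustration: the `Post` shape, the `slop_score` formula, and every threshold are invented for this example, not any company’s actual method.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Post:
    caption: str
    ai_flag: bool      # verdict of some per-image detector (assumed upstream)
    timestamp: float   # seconds since epoch

def caption_similarity(a: str, b: str) -> float:
    """Jaccard similarity of caption word sets — a crude template detector."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def slop_score(posts: list[Post]) -> float:
    """Hypothetical 0..1 score: high posting rate + templated captions +
    AI-flagged images. All weights and cutoffs are arbitrary illustration."""
    if len(posts) < 2:
        return 0.0
    span = max(p.timestamp for p in posts) - min(p.timestamp for p in posts)
    rate = len(posts) / max(span / 86400, 1.0)   # posts per day, floor 1 day
    volume = min(rate / 20.0, 1.0)               # saturates at 20 posts/day
    sims = [caption_similarity(a.caption, b.caption)
            for a, b in combinations(posts, 2)]
    templated = sum(sims) / len(sims)            # mean pairwise similarity
    ai_share = sum(p.ai_flag for p in posts) / len(posts)
    return (volume + templated + ai_share) / 3.0
```

A page posting the same AI-flagged caption dozens of times a day scores near the top of the range; an ordinary account with varied, infrequent posts scores near zero—which is the point of the epidemiological question: the signal lives in the pattern, not in any single image.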
The answer, increasingly, is yes. And the only defense is awareness. Know what you’re looking at. Verify before you feel. Trust, but check. These are not revolutionary ideas. They are the boring, essential habits of an informed person navigating an information environment that has been fundamentally altered by machines that can produce convincing imagery faster than humans can evaluate it.
Slop is the word of the year because it captures what no technical term could: the feeling of being buried in something generated without care, distributed without context, consumed without understanding. It is the texture of an internet that has lost its relationship with truth—not through conspiracy, but through convenience.