Misinformation · August 22, 2025 · 9 min read

From Ghibli to Gaza

When AI images go viral and truth becomes collateral damage

A lie gets halfway around the world before the truth has a chance to get its boots on. Now imagine the lie is a photorealistic image.


March 26, 2025. A user on X named @heyBarsee posts a series of images: ordinary photographs transformed into the unmistakable aesthetic of Studio Ghibli—soft watercolor skies, rounded features, that particular quality of light that Hayao Miyazaki’s studio has spent four decades perfecting. The images are charming, whimsical, and—within hours—everywhere.

#StudioGhibliAI trends globally. ChatGPT’s image generation servers buckle and crash. Millions of users upload family photos, selfies, pet pictures, and vacation snapshots, transforming their lives into anime. The Washington Post runs a piece on the copyright implications. Studio Ghibli’s eighty-four-year-old co-founder, who once called AI art “an insult to life itself,” is quoted extensively. Nobody pays much attention to what he said. The filter is too fun.

There is nothing wrong with the Ghibli filter, considered in isolation. It’s a toy. A delightful, creative toy that lets people see themselves in a beloved aesthetic. But considered in context—as the same technology deployed differently—it becomes a data point in a much darker trend. Because the same model that transforms your selfie into anime can transform a war zone into a resort.

On February 26, 2025, Donald Trump posted an AI-generated video on Truth Social titled “Gaza 2025… what’s next?” The video depicted Gaza as a luxury destination: beachfront resorts, gleaming infrastructure, smiling families. Trump appeared alongside Netanyahu in the generated footage. The video was not labeled as AI-generated. It was not presented as speculative or aspirational. It was posted as a vision—but received as a document.

The distinction between “vision” and “document” is precisely the space where AI-generated imagery does its most dangerous work. A generated image has no relationship to reality. It is a prediction of pixels, not a record of events. But humans are not wired to process images this way. We see a photograph and we believe. We see a realistic image and we feel. The cognitive shortcut that made photography the most trusted medium in history is now the vulnerability that makes AI imagery the most dangerous.
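The mechanics make the point concrete. Strip an image generator down to its inputs and nothing of the world survives the trip: a text string and a random seed go in, pixels come out. A minimal sketch using the open-source diffusers library (the model checkpoint and prompt are illustrative choices, not a claim about any particular image discussed here):

```python
# What a generated "photo" actually consumes: a prompt and a seed.
# No lens, no light, no scene. Sketch only; the checkpoint and
# prompt below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
seed = torch.Generator().manual_seed(42)  # the only "event" involved

image = pipe(
    "beachfront resort at golden hour, photorealistic",
    generator=seed,
).images[0]
image.save("not_a_record_of_anything.png")
```

Nothing in that call ever touched a place or a moment. The camera’s chain of custody, light to sensor to file, simply is not there.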

Germany’s AfD party deployed AI-generated political advertisements ahead of the February 2025 federal election—images described by analysts as “nostalgia machines” that glorified traditional German values through photorealistic but entirely fabricated scenes. In Africa and Asia, political campaigners produced deepfakes of Biden and Trump endorsing local parties. Twenty-six U.S. states have now enacted laws regulating political deepfakes. The horse, one suspects, has left the stable.

But the most disturbing incidents are the quiet ones. The ones that don’t generate headlines because they succeed.

Fox News aired a segment featuring what it presented as a real woman complaining about losing SNAP benefits during a government shutdown. The woman was AI-generated. The segment ran. It was later removed. In the UK, a teacher named Cheryl Bennett was driven into hiding in January 2025 after a deepfake video appeared to show her making racist remarks. The video was entirely synthetic. The social consequences were entirely real.

In January 2026, after the U.S. military operation that removed Nicolas Maduro from power in Venezuela, AI-generated videos purporting to show Venezuelan citizens celebrating in the streets went viral. The videos amassed millions of views on TikTok, Instagram, and X. They looked real. Many viewers believed they were real. The celebrations they depicted may or may not have occurred—but the videos that circulated were not evidence of anything except a model’s ability to generate convincing crowd footage.

This is the epistemological crisis that AI imagery creates. It’s not just that fake things look real. It’s that real things now look fake. When any image might be generated, every image is suspect. The existence of AI-generated Venezuelan celebrations doesn’t just spread false information about Venezuela. It undermines the evidentiary value of real footage from Venezuela. It poisons the well.

The deepfake statistics for 2025 quantify the acceleration: 179 incidents in Q1 alone, surpassing the total for all of 2024. Over two hundred million dollars in financial losses from AI-powered deepfakes in the same period. An elderly woman defrauded of fifty thousand dollars by an Elon Musk deepfake. A recently divorced investor scammed out of one Bitcoin by a deepfake romance.

What connects the Ghibli filter and the Gaza video and the Fox News segment and the Venezuelan celebrations is not malice. Some of these were playful. Some were propagandistic. Some were criminal. What connects them is capability: the same underlying technology, applied across a spectrum from whimsy to weaponization. The model doesn’t know the difference. It generates what you ask it to generate.

Detection matters here in a way it doesn’t in the slop conversation. Slop is ambient noise. This is targeted signal. When a political actor creates a synthetic image to influence an election, or a scammer generates a deepfake to steal money, or a harasser produces a fake video to destroy someone’s life, the ability to identify that content as AI-generated is not a convenience. It is a civil rights issue.

We can build better detectors. We are building better detectors. But we should be honest about the limits of detection as a strategy. A deepfake video that goes viral for six hours before being flagged has already done its work. The flag is a correction. The damage is the spread. In an information environment where virality and veracity have nothing to do with each other, detection is damage mitigation, not damage prevention.
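The arithmetic of that gap is worth making explicit. Here is a toy model, nothing more: assume reach follows a logistic curve that peaks around hour three, and assume the flag lands at hour six. Every number below is an assumption chosen for illustration, not a measurement of any real incident:

```python
import math

# Toy model of viral reach as a logistic curve. All parameters are
# illustrative assumptions, not measured data from any incident.
PEAK_HOUR = 3.0          # assumed midpoint of the viral curve
STEEPNESS = 1.5          # assumed growth rate
FLAG_HOUR = 6.0          # the detection flag lands six hours in
TOTAL_VIEWS = 1_000_000  # assumed eventual audience

def reached_by(hour: float) -> float:
    """Viewers reached by a given hour under the logistic model."""
    return TOTAL_VIEWS / (1 + math.exp(-STEEPNESS * (hour - PEAK_HOUR)))

pre_flag = reached_by(FLAG_HOUR)
print(f"Reached before the flag: {pre_flag:,.0f}")                # ~989,000
print(f"Share of eventual audience: {pre_flag / TOTAL_VIEWS:.1%}")  # ~98.9%
```

Under these assumed numbers, roughly 99 percent of the eventual audience saw the video before any label existed. Change the parameters and the percentage moves, but the structure doesn’t: the flag corrects the record; it does not reach back and un-show the footage.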

Hayao Miyazaki was right. AI art is, in some deep sense, an insult to life—not because the images are bad, but because they are unearned. A photograph earns its power from the fact that someone was there, with a camera, in a moment. An AI image earns nothing. It asserts. And in 2025, assertions that look like evidence have become the most potent weapon in the information landscape.



Want to check an image?

Our detection engine analyzes synthetic patterns across 30+ generators. Free, private, and fast.

Try the detector