Generator Profile

Detect Stable Diffusion Images

Stable Diffusion XL / SD 3.5 by Stability AI

Stable Diffusion is the dominant open-source image-generation architecture, running locally on consumer hardware worldwide. Because the model weights and tooling are open, outputs vary enormously depending on the checkpoint, sampler, CFG scale, and post-processing pipeline used. This variability makes detection both easier (metadata is often preserved) and harder (the output distribution is extremely broad).


Forensic Signals

Known Artifacts

1. PNG tEXt chunks containing full generation parameters: prompt, negative prompt, sampler, CFG scale, seed, and model hash
2. ComfyUI and Automatic1111 workflow metadata embedded in image files by popular local interfaces
3. Characteristic noise patterns that vary by sampler (Euler, DPM++, DDIM) and are identifiable via spectral analysis
4. Fine-detail degradation in high-frequency regions, particularly hair strands, fabric weave, and foliage
5. VAE decoding artifacts visible as subtle grid patterns at 8×8-pixel boundaries in unprocessed outputs
6. Color banding in smooth gradients, particularly sky regions, caused by latent-space quantization
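The first two artifacts can be checked without any image-processing library, since PNG text chunks follow a simple binary layout. The sketch below is an illustrative stdlib-only parser (not DeepSight's implementation) that walks a PNG's chunk stream and collects tEXt and zTXt entries; Automatic1111 conventionally stores everything under a "parameters" key, while ComfyUI uses "prompt" and "workflow" keys.

```python
import struct
import zlib

def png_text_chunks(data: bytes) -> dict:
    """Collect keyword -> text pairs from a PNG's tEXt and zTXt chunks.

    Illustrative sketch: no CRC validation, and iTXt is not handled.
    """
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    pos = 8
    chunks = {}
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # Layout: keyword, NUL separator, Latin-1 text
            key, _, text = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = text.decode("latin-1")
        elif ctype == b"zTXt":
            # Layout: keyword, NUL, compression method byte, zlib data
            key, _, rest = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = zlib.decompress(rest[1:]).decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return chunks
```

A hit on a "parameters" key is strong provenance evidence on its own; the value typically contains the prompt followed by "Steps:", "Sampler:", "CFG scale:", "Seed:", and "Model hash:" fields.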


Methodology

How DeepSight Detects Stable Diffusion

DeepSight's metadata layer is particularly effective for Stable Diffusion, as local generation tools frequently embed generation parameters directly into image files. When metadata is present, detection is near-certain. For stripped images, our forensic layers analyze VAE decoding signatures, sampler-specific noise patterns, and the characteristic frequency-domain artifacts of the latent diffusion architecture.


Provenance Layer

What DeepSight Checks

1. PNG tEXt/iTXt chunks with generation parameters including model hash, prompt, seed, and sampler configuration
2. EXIF Software field set to values like "ComfyUI," "Automatic1111," "InvokeAI," or "Fooocus"
3. Model hash metadata linking to known Stable Diffusion checkpoints on CivitAI and HuggingFace
4. Workflow JSON embedded by ComfyUI containing the full node graph of the generation pipeline
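The fourth check above can be approximated with a small heuristic: parse the embedded JSON and look for node class names that appear in Stable Diffusion pipelines. The node-name set below is an illustrative subset and the function is a sketch, not an exhaustive validator of ComfyUI's graph format.

```python
import json

# Node class names common in SD pipelines (illustrative subset).
KNOWN_SD_NODES = {"KSampler", "CheckpointLoaderSimple", "VAEDecode", "CLIPTextEncode"}

def looks_like_comfyui_workflow(text: str) -> bool:
    """Heuristic check for an embedded ComfyUI generation graph."""
    try:
        graph = json.loads(text)
    except (ValueError, TypeError):
        return False
    if isinstance(graph, dict) and isinstance(graph.get("nodes"), list):
        nodes = graph["nodes"]        # "workflow" style: {"nodes": [...]}
    elif isinstance(graph, dict):
        nodes = list(graph.values())  # "prompt" style: {node_id: {...}}
    else:
        return False
    found = {n.get("class_type") or n.get("type") for n in nodes if isinstance(n, dict)}
    return bool(found & KNOWN_SD_NODES)
```

Because the check keys on node types rather than prompts or model names, it still fires for custom checkpoints and LoRA-based workflows.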


Common Questions

Frequently Asked Questions

Can DeepSight detect images from custom Stable Diffusion models (LoRAs, fine-tunes)?
Yes. While custom models alter the aesthetic output, the underlying architecture produces consistent forensic signals. Our detection targets the latent diffusion process itself rather than specific model weights, making it effective across the broad ecosystem of community checkpoints and LoRA modifications.
Why do some Stable Diffusion images have generation data and others don't?
It depends on the interface used. Automatic1111 and ComfyUI embed parameters by default. Some API services, mobile apps, and third-party tools strip this metadata. Users can also manually strip it. When metadata is present, DeepSight achieves near-perfect detection accuracy.
Does running Stable Diffusion locally make images harder to detect?
Not inherently. Local generation often preserves more metadata than cloud services, which actually aids detection. The pixel-level forensic signals are identical regardless of where the model runs. Hardware differences do not affect the generation artifacts that our models analyze.
Can you detect Stable Diffusion inpainting or img2img edits?
Partial AI manipulation (inpainting, img2img) is one of the hardest detection challenges across all generators. DeepSight's error level analysis can identify regions of an image with inconsistent compression and noise characteristics, which often indicates selective AI editing.
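One way to make "inconsistent noise characteristics" concrete is to compare per-block pixel variance against the image-wide distribution: a region inpainted by a diffusion model often has noise statistics that diverge from the surrounding camera noise. The sketch below is a simplified stdlib-only illustration of that idea (operating on a plain 2D grayscale grid), not DeepSight's error level analysis; the block size and z-score threshold are arbitrary assumptions.

```python
def block_variances(gray, block=8):
    """Per-block pixel variance for a 2D grayscale grid (list of rows)."""
    h, w = len(gray), len(gray[0])
    out = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            vals = [gray[y][x] for y in range(by, by + block)
                               for x in range(bx, bx + block)]
            mean = sum(vals) / len(vals)
            out.append(sum((v - mean) ** 2 for v in vals) / len(vals))
    return out

def inconsistent_blocks(variances, z=2.5):
    """Indices of blocks whose variance deviates strongly from the mean."""
    mean = sum(variances) / len(variances)
    sd = (sum((v - mean) ** 2 for v in variances) / len(variances)) ** 0.5 or 1.0
    return [i for i, v in enumerate(variances) if abs(v - mean) / sd > z]
```

A real forensic pipeline would combine this with JPEG recompression residuals and operate per channel, but the principle is the same: selectively edited regions stand out statistically from the rest of the image.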

Try it now — upload a Stable Diffusion image

Our detection engine analyzes Stable Diffusion outputs across metadata, statistical forensics, and vision-based analysis. Free, private, and fast.

Analyze an image