guide · 9 min read

How to Read an ELA Map: A Practical Walkthrough

Error Level Analysis is the most-cited image forensic technique and the most-misread. Here's how to actually interpret an ELA visualization — what's signal, what's noise, and where ELA misfires.

Error Level Analysis is the technique that made image forensics popular online — and the technique most people read wrong. Half the "this photo is fake!" threads on social media point at an ELA map and draw the opposite conclusion of what the map actually shows. Here's how to read one without embarrassing yourself.

What ELA actually does

The procedure is simple:

  1. Take an image
  2. Save it as JPEG at a known quality (e.g. 90)
  3. Decode that JPEG back to pixels
  4. Subtract the result from the original
  5. Multiply the difference by some amplification factor (usually 10–25×)
  6. Display the result

What you see is a per-pixel measurement of how much each pixel changed under recompression. That's it. Everything else is interpretation.
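The six steps above can be sketched in a few lines with Pillow. This is a minimal illustration, not a production tool: the filename, the quality-90 probe, and the 15× amplification factor are all assumptions you'd tune in practice.

```python
import io
from PIL import Image, ImageChops

def ela_map(img: Image.Image, quality: int = 90, scale: int = 15) -> Image.Image:
    """Recompress at a known quality, subtract from the original, amplify."""
    img = img.convert("RGB")
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)    # step 2: save at known quality
    buf.seek(0)
    recompressed = Image.open(buf).convert("RGB")    # step 3: decode back to pixels
    diff = ImageChops.difference(img, recompressed)  # step 4: per-pixel subtraction
    return diff.point(lambda v: min(255, v * scale)) # step 5: amplify the delta

# Usage (assumes a local file):
# ela = ela_map(Image.open("photo.jpg"))
# ela.save("photo_ela.png")                          # step 6: display / inspect
```

The only real decisions are the probe quality and the amplification factor; everything else is mechanical, which is exactly why the interpretation step carries all the weight.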

What ELA shows you (the actual signal)

A pixel that changes a lot under JPEG recompression carries detail JPEG can't preserve at quality 90 — usually edges, sharp boundaries, or fine texture. A pixel that doesn't change much is in a region JPEG already represents efficiently — usually smooth color, blur, or low detail.

So in a real photo, you should expect:

  • Bright pixels at edges, fine textures, hair, foliage, intricate fabric
  • Dim pixels in smooth regions: skin, sky, walls, out-of-focus background
  • Roughly uniform "energy density" within regions of similar texture

That's a healthy ELA. Most real photos look like that.

What "splice detection" actually looks like

The original use case for ELA was detecting edited photos. The theory: if you splice in content from another image, that content has been compressed twice (once in the source, once when the spliced image was saved). Different compression histories produce different ELA brightness levels in the same image.

So a "good" splice in ELA looks like a rectangular region with ELA brightness markedly different from its surroundings — usually dimmer (because the spliced content has been compressed more times and has less detail to throw away).

Watch for:

  • Sharp brightness boundaries that don't match the visible edges
  • Identifiable shapes — a dimmer rectangle that doesn't correspond to anything in the visible image
  • Mismatched ELA levels between regions that should compress similarly

Be skeptical of:

  • Bright halos around objects (that's just edge detection)
  • Bright eyes in portraits (eyes have detail; eyes are bright in ELA)
  • Bright text overlays (text is sharp; text is bright)

Those are not splice signals. They are normal.
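The "mismatched ELA levels" check can be made concrete by comparing mean brightness between two regions that should compress similarly. A rough sketch, where the box coordinates and the 2× ratio threshold are illustrative assumptions, not calibrated values:

```python
from PIL import Image, ImageStat

def region_mean(ela_img: Image.Image, box: tuple) -> float:
    """Mean grayscale ELA brightness inside (left, top, right, bottom)."""
    return ImageStat.Stat(ela_img.convert("L").crop(box)).mean[0]

def levels_mismatch(ela_img: Image.Image, box_a: tuple, box_b: tuple,
                    ratio: float = 2.0) -> bool:
    """Flag two comparable regions whose ELA brightness differs by > ratio."""
    a, b = region_mean(ela_img, box_a), region_mean(ela_img, box_b)
    lo, hi = sorted((a, b))
    return hi > ratio * max(lo, 1.0)  # floor guards against near-zero regions
```

Note the precondition: the two boxes must cover texture that *should* compress similarly (skin vs. skin, wall vs. wall). Comparing hair against sky will always "mismatch" and tells you nothing.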

What an AI-generated image often looks like under ELA

Here's where ELA tells a different story than its original purpose. Most AI-generated images decode straight from latent space and get saved at one compression level. They've never been recompressed before, never been edited, never had any region "live" at a different quality from another region.

Under ELA, that often produces a uniformly dark map — almost no per-pixel delta anywhere. The image survives recompression because it was already at one quality level. Real photos, in contrast, almost always have at least some brightness in detail-heavy regions.

A nearly-black ELA map is a signal worth noticing. Not proof — JPEG-from-camera at quality 95+ can also look this way. But if you also see a smooth FFT spectrum and clean noise residual? Three signals agreeing is when you start to call it.
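The "nearly black" observation can be quantified as a mean-brightness check on the ELA map. The threshold of 5.0 below is a hypothetical starting point, not a calibrated cutoff; real pipelines would tune it per probe quality and amplification factor:

```python
from PIL import Image, ImageStat

def ela_is_suspiciously_dark(ela_img: Image.Image,
                             threshold: float = 5.0) -> tuple:
    """Return (flag, mean_brightness) for a uniformly-dark ELA map."""
    gray = ela_img.convert("L")
    mean = ImageStat.Stat(gray).mean[0]  # average pixel value, 0-255
    return mean < threshold, mean
```

A True flag here is only one vote, for exactly the reason in the paragraph above: a fresh high-quality camera JPEG can trip the same check.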

Where ELA misfires

Be honest about ELA's failure modes:

  1. Screenshots — content from a screen has been re-rendered and re-encoded multiple times. ELA reads as chaotic.
  2. Fresh JPEGs from a phone — the phone's pipeline already optimizes for the chosen JPEG quality. ELA shows the expected pattern but says little about edits made within that pipeline.
  3. PNG-to-JPEG conversions — the conversion creates uniform recompression. Looks suspicious; isn't.
  4. Heavy filtering (Instagram, beauty filters) — most filters re-render large regions, blurring out splice tells.
  5. Very small images — under 500px on the long side, ELA noise overwhelms signal.
  6. Images recompressed in transit — most messaging apps and social platforms recompress everything they accept. The original ELA tells are gone before you see the image.

If you're looking at an image from Instagram or Twitter, your ELA map is mostly telling you about the platform's encoder, not about the source. Always note the source pipeline before drawing a conclusion.

How to actually read an ELA map (the algorithm)

When you load a result, work through this list in order:

  1. What did the source pipeline do? Camera-direct? Screenshot? Re-saved? Through a platform?
  2. Is the global brightness consistent with a real photo? Edges and texture bright, smooth areas dim?
  3. Are there any rectangular brightness boundaries that don't match the visible image?
  4. Are there any regions that are markedly dimmer than their surroundings (indicating heavier prior compression)?
  5. Does any large region look uniformly dark with no detail brightness (AI-generation tell)?
  6. What does the FFT spectrum say? ELA alone is one signal. Always cross-check.

If you got through step 1 and the answer was "screenshot" or "via Instagram," you should generally not be using ELA at all on this image. If steps 2–5 contradict each other, treat the result as ambiguous.
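The checklist above can be encoded as a small decision helper. The field names, source labels, and verdict strings are assumptions made for this sketch; the point is the ordering, with the pipeline check short-circuiting everything else:

```python
from dataclasses import dataclass

@dataclass
class ElaObservations:
    source: str                    # "camera", "screenshot", "platform", ...
    edges_bright: bool             # step 2: texture bright, smooth areas dim
    unexplained_rectangles: bool   # step 3: boundaries not in the visible image
    dimmer_regions: bool           # step 4: heavier prior compression somewhere
    uniformly_dark: bool           # step 5: near-zero delta everywhere

def read_ela(obs: ElaObservations) -> str:
    if obs.source in ("screenshot", "platform"):
        return "skip: pipeline has destroyed the ELA signal"   # step 1 gates all
    if obs.unexplained_rectangles or obs.dimmer_regions:
        return "possible splice: cross-check FFT and noise residual"
    if obs.uniformly_dark and not obs.edges_bright:
        return "possible AI generation: cross-check other signals"
    if obs.edges_bright:
        return "consistent with a real photo"
    return "ambiguous: signals contradict, treat as inconclusive"
```

Every branch that flags something still ends in "cross-check," which mirrors step 6: ELA alone never closes the case.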

Worked example: the "uniformly dark" case

You upload an image. The ELA panel comes back almost entirely black with a few bright pixels at the very edges of objects.

The naive read: "Wow, almost no compression artifacts! This must be a clean original."

The forensic read: "Almost no per-pixel delta under recompression. This image either lives at exactly the quality I'm probing, was generated and saved once with no prior compression history, or has been heavily denoised. AI generation is on the candidate list. I should look at the FFT and noise residual before drawing a conclusion."

Same observation. Different conclusion. The difference is knowing what ELA can and can't say.

Try it yourself

Drop any image into our forensic dashboard and you'll see the ELA map alongside the FFT spectrum, RGB channel split, noise residual, and (when present) C2PA signature. Each panel has a one-line description of what it's measuring and how to read it.

For more on what AI-generated images look like across these signals, see The Tells of an AI Image.

The bottom line

ELA is the most-cited image forensic technique because it's visually compelling. A bright glow looks dramatic; a uniformly dark map looks suspicious. Both can be misread. The technique is at its best when treated as one of several signals, when the source pipeline is known, and when you've ruled out the boring explanations before reaching for the dramatic ones.