About Could This Be True?
A forensic AI-detection tool. We don't just give you a number — we show you the evidence.
Why this exists
Every major AI-detection product outputs a single number — “87% AI” — and asks you to trust it. We don't.
Real forensic work shows its evidence. A pathologist doesn't hand you a verdict; they walk you through the slides. A photo forensics expert doesn't say “edited” — they show you the splice, the compression mismatch, the noise inconsistency. That's the model we built this product around: every signal is visualized, every decision is auditable.
What we run
- Text: burstiness, lexical tics, n-gram repetition, sentence-start variety, punctuation patterns.
- Image: Error Level Analysis, FFT magnitude spectrum, RGB channel decomposition, noise residual (PRNU proxy), C2PA signature verification.
- Audio (coming soon): spectrogram analysis, formant patterns, vocoder artifacts.
- Video (coming soon): frame consistency, optical-flow flicker, face-landmark stability, lip-sync drift.
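To give a flavor of the text signals above, here is a minimal sketch of three of them — burstiness, sentence-start variety, and n-gram repetition. This is an illustration under our own simplified definitions (the function names and thresholds are ours, not the production implementation):

```python
import re
import statistics

def _sentences(text: str) -> list[str]:
    """Naive sentence split on terminal punctuation."""
    return [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]

def burstiness(text: str) -> float:
    """Std-dev / mean of sentence lengths in words.
    Human prose tends to vary sentence length more than generated text."""
    lengths = [len(s.split()) for s in _sentences(text)]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

def sentence_start_variety(text: str) -> float:
    """Fraction of sentences opening with a distinct first word."""
    starts = [s.split()[0].lower() for s in _sentences(text)]
    return len(set(starts)) / len(starts) if starts else 0.0

def ngram_repetition(text: str, n: int = 3) -> float:
    """Share of repeated word n-grams (0 = all unique, →1 = highly repetitive)."""
    words = text.lower().split()
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    return 1 - len(set(grams)) / len(grams)
```

Each score is a piece of evidence you can inspect per sentence or per n-gram, not a verdict on its own — which is the whole point of the product.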
What we don't do
- Sell the bypass. We will never sell a tool that helps content evade detection. The conflict of interest would destroy the entire product.
- Pretend to be a polygraph. Detection is evidence, not proof. False positives have caused real harm — students wrongly accused, photographers wrongly flagged. Treat every verdict as the start of an investigation, not the end.
- Upload your data unnecessarily. Most checks run in your browser. Nothing leaves your machine unless you explicitly choose a server-side option.
What's next
- Audio detection (spectrogram + vocoder fingerprint).
- Video detection (face-landmark stability, lip-sync drift).
- C2PA signature reading on every image.
- API access for fraud / moderation / journalism workflows.
Get in touch
We're actively looking for design partners in fraud investigation, journalism, insurance, and content moderation. If forensic explainability matters to your workflow, we want to talk.