How It Works

Learn what each analyzer checks and how to interpret the results.

  1. Digital Signature (C2PA)
  2. SynthID (Google)
  3. TinEye Reverse Search
  4. AI Metadata (EXIF)
  5. Human Consensus
Digital Signature (C2PA)

Looks for authenticated Content Credentials.

C2PA (Coalition for Content Provenance and Authenticity) is a standard for embedding provenance records -- called manifests -- directly into media files. A manifest documents where a file came from, what tools created or modified it, and who vouched for it. It's cryptographically signed, so any post-signing tampering breaks the signature.
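The "any tampering breaks the signature" property can be sketched in a few lines. This is a conceptual illustration only: real C2PA uses X.509 certificates and COSE signatures over a JUMBF manifest, but an HMAC over the file bytes (with a made-up demo key) shows the same behavior.

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # stands in for a real signer's private key

def sign(payload: bytes) -> bytes:
    """Produce a signature over the file bytes (simplified stand-in)."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(payload), signature)

image = bytearray(b"...image pixels and manifest...")
signature = sign(bytes(image))
print(verify(bytes(image), signature))   # untouched file: verifies

image[5] ^= 0x01                         # flip a single bit post-signing
print(verify(bytes(image), signature))   # any change breaks the signature
```

The point of the sketch: verification recomputes the signature over the current bytes, so even a one-bit change after signing produces a mismatch.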

OpenAI's image models (gpt-image-1 / 1.5), Google's Nano Banana Pro and Nano Banana 2 models (Gemini image family), Adobe Photoshop & Lightroom, and cameras from Nikon, Canon, and Leica all support C2PA signing.

Because both AI generators and cameras use C2PA, the manifest's content matters more than its presence. A manifest claiming digitalSourceType: trainedAlgorithmicMedia means the signer says the image is AI-generated; digitalSourceType: digitalCapture means the signer says a camera captured it. Trace Machine surfaces these signals for you.

What C2PA can show
  • Integrity: the image hasn't been altered since signing -- even a single changed pixel breaks the signature.
  • Claimed origin: the signer's assertions about how the image was made (AI-generated, camera-captured, edited, etc.).
  • Edit history: a chain of prior manifests ("ingredients") showing transformations over time, if all tools in the chain supported C2PA.
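Interpreting the signer's origin claim boils down to inspecting the manifest's digitalSourceType assertion. The sketch below assumes a heavily simplified manifest (a plain dict with the short IPTC term names the text uses); real C2PA manifests are binary JUMBF structures with much more detail.

```python
# Simplified view of a manifest's origin assertion. Real manifests use
# full IPTC digital-source-type URIs; the short suffixes are kept here.
def classify_origin(manifest: dict) -> str:
    source = manifest.get("digitalSourceType", "")
    if source.endswith("trainedAlgorithmicMedia"):
        return "signer claims AI-generated"
    if source.endswith("digitalCapture"):
        return "signer claims camera capture"
    return "no origin claim"

ai_manifest = {"digitalSourceType": "trainedAlgorithmicMedia"}
camera_manifest = {"digitalSourceType": "digitalCapture"}
print(classify_origin(ai_manifest))      # signer claims AI-generated
print(classify_origin(camera_manifest))  # signer claims camera capture
```

Note that both branches report what the signer *claims* -- the next section covers why that claim isn't independently verified.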
What C2PA cannot prove

A manifest records claims made by the signer -- it doesn't independently verify them. "Camera capture" in a manifest proves the signer said it was a camera capture, not that a camera actually took the photo. Claims are only as trustworthy as the signer.

Can you forge a C2PA manifest?

Yes. Anyone can create a cryptographically valid manifest using free, open-source tools. What a forger can't easily do is get their certificate onto a recognized trust list -- the registry of vetted signers maintained by the C2PA organization. Trace Machine checks whether the signing certificate is on a trust list and tells you the result.
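The two checks -- "is the signature cryptographically valid?" and "is the signer on a trust list?" -- are independent, which is why a forged-but-valid manifest is possible. A minimal sketch, with placeholder fingerprints standing in for a real trust list:

```python
# Hypothetical trust list: fingerprints of certificates vetted by the
# C2PA organization. Anyone can self-sign; few can get on this list.
TRUSTED_FINGERPRINTS = {"sha256:aaa111", "sha256:bbb222"}  # placeholders

def assess(signature_valid: bool, cert_fingerprint: str) -> str:
    if not signature_valid:
        return "invalid: manifest altered after signing"
    if cert_fingerprint in TRUSTED_FINGERPRINTS:
        return "valid signature from a trust-listed signer"
    return "valid signature, but signer is NOT on the trust list"

print(assess(True, "sha256:aaa111"))  # trusted signer
print(assess(True, "sha256:ccc333"))  # cryptographically fine, still untrusted
```

The third outcome is the interesting one: a forger's manifest can pass every cryptographic check and still fail the trust-list check.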

Important limitation: Trust-list recognition isn't foolproof. The C2PA trust list is maintained by the C2PA organization itself, not the industry-standard CCADB infrastructure that browsers use for HTTPS. Security researchers have demonstrated forgeries that passed validation on Adobe's Content Credentials site. C2PA is useful provenance information, not a verdict of authenticity.

Manifests are signed with certificates, and certificates expire. What happens next depends on whether the signer included an independent timestamp:

  • With a TSA timestamp: A Time Stamp Authority proves the signature was created while the certificate was valid. The manifest stays verifiable indefinitely.
  • Without a TSA timestamp: Once the certificate expires, the manifest can no longer be verified. Many real-world implementations skip this step.

Trace Machine shows you which kind of timestamp (if any) a manifest has.
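The expiry rule above can be expressed as a small decision function. This is a simplified model (real validation also checks revocation and the TSA's own certificate), with invented example dates:

```python
from datetime import datetime, timezone
from typing import Optional

def manifest_verifiable(cert_not_after: datetime,
                        tsa_timestamp: Optional[datetime],
                        now: datetime) -> bool:
    """Simplified rule: can this manifest still be verified?"""
    if tsa_timestamp is not None:
        # A TSA proves signing happened while the cert was valid,
        # so verification survives expiry indefinitely.
        return tsa_timestamp <= cert_not_after
    # Without a timestamp, verification fails once the cert expires.
    return now <= cert_not_after

expiry = datetime(2025, 1, 1, tzinfo=timezone.utc)   # cert expires here
signed = datetime(2024, 6, 1, tzinfo=timezone.utc)   # TSA-attested signing time
today  = datetime(2026, 6, 1, tzinfo=timezone.utc)   # well after expiry

print(manifest_verifiable(expiry, signed, today))  # True: timestamped
print(manifest_verifiable(expiry, None, today))    # False: cert expired
```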

Important limitation: C2PA metadata lives inside the image file and is easily lost -- screenshots, metadata stripping, re-encoding, or uploading to most social platforms will erase it. The absence of a manifest tells you nothing; only its presence provides information.
C2PA demo image

Analyze this demo image to see what Trace Machine shows when it detects a C2PA manifest.

Infrared-style tree scene used as a C2PA demo image
Generated by gpt-image-1 (inside ChatGPT) with the prompt "An infrared photo of a tree on a hill."


SynthID (Google)

Uses Google reverse image search to check for an invisible watermark.

SynthID is an invisible watermark embedded in anything generated or edited by Google's AI models. Unlike C2PA and EXIF, it isn't metadata -- it's an imperceptible pattern woven into the image's actual pixels.

SynthID is impossible to add after the fact -- it's embedded as the image is being generated -- and it's very hard to remove. When SynthID is detected, it's very strong evidence that the image was generated or edited by one of Google's AI models.

Important limitation: SynthID can be destroyed by very aggressive editing, and sometimes fails to register if the image is too small or heavily compressed. There's also no official automated API for checking SynthID -- the most practical check is Google reverse image search.


TinEye Reverse Search

Uses TinEye to find where the image exists on the internet and when it first appeared.

TinEye is a reverse image search engine: it finds where a given image appears across the public web. Its perceptual matching algorithm is strong enough to detect crops and edits of the same image -- though it's not perfect, so verify by inspecting the matches yourself.

Trace Machine checks the results against a large community-maintained list of known AI image generation sites; if an image turns up on one of them, it's more likely to be AI-generated. Conversely, if TinEye shows the image has been online since before 2021 or 2022 -- predating modern AI image generators -- that's strong evidence the image is not AI-generated.

Important limitation: TinEye can only search the public web -- which is for the best! The absence of results doesn't necessarily mean the image is original.

Compliance note: TinEye results are fetched on demand and are not retained by Trace Machine.


What the results mean:

  • Earliest date: The oldest known appearance of this image on the internet -- roughly when it first appeared online.
  • Found on AI sites: At least one match was found on a known AI image generation or hosting site.
  • Not on known AI sites: No matches were found on sites in the AI site list.
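The interpretation logic above can be sketched as a small function over TinEye-style matches. The AI-site entries, the 2021 cutoff date, and the match records below are illustrative assumptions, not Trace Machine's actual list or data format:

```python
from datetime import date
from urllib.parse import urlparse

AI_SITE_LIST = {"civitai.com", "ai-gallery.example"}  # example entries only
AI_ERA_START = date(2021, 1, 1)  # rough start of modern AI image generation

def interpret(matches: list[dict]) -> dict:
    """Summarize reverse-search matches into the three result fields."""
    domains = {urlparse(m["url"]).hostname for m in matches}
    earliest = min((m["first_seen"] for m in matches), default=None)
    return {
        "earliest_date": earliest,
        "found_on_ai_sites": bool(domains & AI_SITE_LIST),
        "predates_ai_era": earliest is not None and earliest < AI_ERA_START,
    }

matches = [
    {"url": "https://example.com/photo.jpg", "first_seen": date(2014, 3, 2)},
    {"url": "https://civitai.com/images/1",  "first_seen": date(2023, 8, 9)},
]
result = interpret(matches)
print(result["predates_ai_era"])  # True: strong evidence it's not AI-generated
```

Note that the two signals can coexist, as here: an old original image can be re-hosted on an AI site, which is exactly why the results need human interpretation.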



AI Metadata (EXIF)

Scans metadata for hints that common AI tools leave behind.

EXIF stands for Exchangeable Image File Format. It's a standard for embedding metadata directly inside an image file -- it's the reason your photos app can show when a photo was taken, what camera settings were used, and other technical details.

AI image generation enthusiasts sometimes use programs like Automatic1111 (a little dated at the time of writing) and ComfyUI to set up complex image generation prompts and workflows. Note -- these aren't AI image models, but interfaces that let users control and automate the image generation process. These programs often write their parameters into EXIF metadata.
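Automatic1111, for example, writes a single "parameters" text block (prompt, optional negative prompt, then a comma-separated settings line) into the image's metadata. A minimal parser for that block, assuming the common output format (the sample string below is invented, and the exact format varies by version):

```python
def parse_a1111_parameters(text: str) -> dict:
    """Parse an Automatic1111-style 'parameters' metadata string.
    Format assumed: prompt line, optional 'Negative prompt:' line,
    then comma-separated 'Key: value' settings."""
    lines = text.strip().split("\n")
    result = {"prompt": lines[0], "negative_prompt": "", "settings": {}}
    for line in lines[1:]:
        if line.startswith("Negative prompt:"):
            result["negative_prompt"] = line.split(":", 1)[1].strip()
        elif ":" in line:
            for pair in line.split(","):
                key, _, value = pair.partition(":")
                if value:
                    result["settings"][key.strip()] = value.strip()
    return result

sample = (
    "an astronaut riding a horse\n"
    "Negative prompt: blurry, low quality\n"
    "Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 12345"
)
parsed = parse_a1111_parameters(sample)
print(parsed["settings"]["Steps"])  # 20
```

Finding a block like this in an image's metadata is a strong hint the image came out of a Stable Diffusion workflow -- which is exactly the signal this analyzer looks for.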

Important limitation: It's really easy to remove EXIF data. You can do it in Photoshop, GIMP, or any number of online tools. So the absence of EXIF data doesn't prove anything.

EXIF demo image: Automatic1111

Try analyzing this demo image to see what it looks like when Trace Machine detects EXIF data.

Astronaut demo image with EXIF parameters from Automatic1111
This image was generated in Stable Diffusion using the Automatic1111 UI, which writes detailed generation parameters into EXIF fields.

EXIF demo image: ComfyUI

Try analyzing this demo image to see what it looks like when Trace Machine detects EXIF data.

Bottle demo image with EXIF parameters from ComfyUI
This image was generated in Stable Diffusion using ComfyUI, another interface that can encode prompts and parameters into EXIF metadata.


Human Consensus

Compares this upload to prior submissions and shows community votes.

Some of our best tools for distinguishing synthetic media are still our own eyes and minds. Trace Machine maintains a community consensus database, where users vote on whether an image is AI-generated or not.

Important limitation: The unfortunate truth is that synthetic media is a powerful tool for deception -- people may want to misrepresent an AI-generated image as real or vice versa. You're welcome to vote because an image "looks AI-generated" or "looks real," but if you're not sure, err on the side of caution and abstain from voting. Human consensus is not infallible; treat it as what it is: just one data point among many.
