TrainedOn AI

Has image-generation AI been trained on your images?

TrainedOn AI checks whether image-generation models may have been trained on your portfolio and returns a clear, review-ready evidence report.

Early access. Non-binding intake. We reply within 48h with a short walkthrough and next steps. No uploads unless scope, retention, and deletion are confirmed. Current support covers open-source Stable Diffusion models, with primary focus on SDXL.

Who It Is For

Built for creators, rights teams, and image libraries

Portfolio-level evidence that goes beyond dataset lookups: we assess model behavior to produce indicators you can use in internal decision-making.

Who uses it

  • Creators and photographers (portfolio checks)
  • Agencies, archives, and rights-holders (batch reviews)
  • Rights, legal, and policy teams

What they need

  • Signals from model behavior, not just dataset match lists
  • Collection-level indicators, including inconclusive outcomes
  • Shareable evidence summaries and artifact outputs
  • Confidential handling with NDA and retention controls

Product Demo

Product Interface & Proof of Work

Preview of interface surfaces and evidence views. The walkthrough video appears here once finalized.

Recorded product walkthrough

Portfolio upload flow

Drag-and-drop intake with clear file handling and status cues.

Result evidence view

Batch indicators, caveats, and report artifacts for review.

Case bookkeeping

Structured records for documentation, reporting, and follow-up.

Context

Why this is hard to know today

Dataset indexes can show whether files appear in known corpora, but they cannot by themselves show whether a specific model learned from a portfolio.

What the world looks like now

  • Most major models do not publish complete training datasets, and provenance is hard to audit.
  • Images can be scraped, mirrored, resized, compressed, or watermarked, complicating attribution.
  • Copyright and licensing questions are actively contested in public debate and litigation.

When people care

  • You publish, license, or represent image portfolios (creators, agencies, archives, rights orgs).
  • You need output usable for internal review or counsel.
  • You want to prioritize where to investigate, document, or take next steps.

Workflow

How the review flow works

From intake to report, the flow turns portfolio uploads into clear indicators and caveats for review.

1. Intake

Upload portfolio

Drag a folder or multi-select files. Your portfolio stays saved for later reruns.

Batch upload · Persistent portfolio

2. Setup

Choose review mode

Start with the default automated setup, with advanced configuration available when needed.

Auto default · Advanced options

3. Run

Analysis run executes

The job runs through queued and running states and ends as complete, failed, or inconclusive.

Queued -> running · Retry-safe

4. Evidence

Review and export

See summary metrics, per-image rows, variation bookkeeping, and downloadable artifacts.

Evidence table · CSV/PDF/JSON
  1. Sign in and create your workspace. Start with email or SSO and create a workspace for your portfolios and runs.
  2. Upload a portfolio once. Add files in bulk and keep them available for reruns, comparisons, and bookkeeping.
  3. Start analysis with default settings. Run with the standard setup first, and adjust advanced settings only if needed.
  4. Review outcomes with reasons. Each run includes a clear status, confidence tier, and explicit reason codes for failed or inconclusive outcomes.
  5. Export artifacts and decide the next step. Download reports, review variation records, and rerun or escalate based on the evidence.
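The run lifecycle described above (queued, running, then complete, failed, or inconclusive, with retry-safe reruns) can be sketched as a small state machine. This is an illustrative model only: the status names and transition rules are assumptions for this sketch, not the product's actual schema.

```python
from enum import Enum

class RunStatus(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    COMPLETE = "complete"
    FAILED = "failed"
    INCONCLUSIVE = "inconclusive"

# Assumed transitions for a retry-safe run: failed and inconclusive
# runs can be re-queued, matching the "rerun or escalate" step above.
TRANSITIONS = {
    RunStatus.QUEUED: {RunStatus.RUNNING},
    RunStatus.RUNNING: {RunStatus.COMPLETE, RunStatus.FAILED, RunStatus.INCONCLUSIVE},
    RunStatus.FAILED: {RunStatus.QUEUED},        # retry
    RunStatus.INCONCLUSIVE: {RunStatus.QUEUED},  # rerun, e.g. with new settings
    RunStatus.COMPLETE: set(),                   # terminal
}

def advance(current: RunStatus, target: RunStatus) -> RunStatus:
    """Move a run to `target`, rejecting invalid transitions."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot go from {current.value} to {target.value}")
    return target
```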

Pricing

Early-access indicative pricing. Final quote is confirmed after we review scope.

Early Access

Starter

EUR/USD 20

First 100 images

  • One portfolio review setup
  • Sample evidence and recommendations
  • Standard reporting

Flexible

Scale

EUR/USD 0.10

Per image (101-1,000)

  • Scalable evidence signals
  • Batch upload handling
  • Faster turnaround options

Enterprise

Custom

Custom quote

1,001+ images + org workflows

  • Custom engagement and retention terms
  • NDA and procurement support
  • Advanced reporting and enterprise support

Scope-first pricing policy

This page is intake-first. We confirm segment (creator vs business), model coverage, and final quote before any uploads.
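As a back-of-the-envelope illustration, one plausible reading of the indicative tiers above (a flat EUR/USD 20 covering the first 100 images, then 0.10 per image up to 1,000, and a custom quote beyond that) can be computed like this. The function name and the cumulative-tier interpretation are assumptions for illustration; the actual quote is confirmed after scope review.

```python
from typing import Optional

def estimate_cost(num_images: int) -> Optional[float]:
    """Indicative early-access cost in EUR/USD (illustrative reading).

    Assumes cumulative tiers: a flat 20 covers the first 100 images,
    images 101-1,000 add 0.10 each, and 1,001+ requires a custom
    enterprise quote (returned as None).
    """
    if num_images <= 0:
        return 0.0
    if num_images > 1000:
        return None  # enterprise: custom quote
    if num_images <= 100:
        return 20.0
    return 20.0 + 0.10 * (num_images - 100)
```

Under this reading, a 500-image portfolio would come to 20 + 0.10 × 400 = 60 EUR/USD.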

Questions about enterprise or custom workflows? Email support@trainedonai.com.

FAQ

Is this definitive proof?

No. We provide statistical indicators and a cautious interpretation. Some cases will be inconclusive.

How is this different from LAION or dataset search tools?

Search tools check whether images appear in known datasets. TrainedOn AI complements that by testing model behavior against your portfolio to produce probabilistic indicators.

Which models are supported?

At this stage we support open-source models only, specifically the Stable Diffusion family with a focus on SDXL. Commercial black-box models are not included yet, and model coverage is expanding over time.

What do I receive?

You receive a concise report with findings, summary statistics, caveats, and test setup details.

Do you need private training data?

No. We evaluate model behavior through controlled assessments. Results are probabilistic indicators and should be interpreted cautiously.

How do you handle confidentiality?

We treat uploads as confidential, can work under NDA, and align on retention and deletion expectations up front.

Do I need to upload images now?

No. Start with non-binding intake. We only request uploads after scope and confidentiality are agreed.

How long does a check take?

Typical turnaround depends on portfolio size and model availability. We share an estimate after intake.

Is this legal advice?

No. This is informational evidence output, not legal advice. Consult counsel for legal conclusions.

Why We Started TrainedOn AI

Based in Norway, TrainedOn AI builds portfolio-level analysis for creators, agencies, and rights-holders.

When image-generation models began producing work that closely resembled specific artists and photographers, a critical question emerged: had those models been trained on those creators' portfolios?

We founded TrainedOn AI to address that gap. Our mission is to provide creators, agencies, and rights-holders with controlled workflows that produce measurable indicators for business, licensing, and legal conversations. Our current production scope is open-source models in the Stable Diffusion family, with a focus on SDXL. We operate with creator confidentiality first, including NDA-backed processes and privacy-aligned retention policies, while gradually expanding model support.

Erik Ma
Founding team

Direct contact: support@trainedonai.com.

Follow updates on LinkedIn.