Product flow

The AI Act project record, from first classification to reviewer handoff.

Epok is organized around the operational objects that model teams already use: systems, model versions, datasets, logs, required fields, and generated documents.

The core product idea is intentionally simple: regulatory evidence should be assembled from the same technical trail that produced the model. A model version should know which datasets, training runs, evaluation metrics, deployment assumptions, and review decisions support it.

Epok keeps those objects connected so a reviewer can see what was captured automatically, what was generated from deterministic templates, and what still requires human judgement.
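The linked-objects idea can be sketched in a few lines. This is a hypothetical illustration, not Epok's actual schema; every class and field name here is an assumption chosen to mirror the objects named above.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names and fields are assumptions,
# not Epok's real data model.

@dataclass
class DatasetCard:
    name: str
    article: str          # AI Act article the card evidences, e.g. "Article 10"

@dataclass
class EvidenceLog:
    source: str
    events: int           # number of captured runtime events

@dataclass
class ModelVersion:
    version: str
    datasets: list = field(default_factory=list)
    logs: list = field(default_factory=list)

@dataclass
class AIActProject:
    system: str
    model_versions: list = field(default_factory=list)

# A model version "knows" the objects that support it:
card = DatasetCard("icu-vitals", "Article 10")
mv = ModelVersion("v2.4.1",
                  datasets=[card],
                  logs=[EvidenceLog("runtime", 2400)])
project = AIActProject("ICU deterioration model", model_versions=[mv])
```

Because every object holds direct references to its supporting objects, a reviewer can walk from a project to a model version to the dataset card behind it without reconstructing context by hand.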

[Product preview: an evidence package for an ICU deterioration model in review, showing Readiness (AI Act Project, 27 fields), Registry (Model Version v2.4.1), Data governance (Dataset Card, Article 10), and Runtime (Evidence Log, 2.4k events) cards.]

01

Classify the AI system

Start with intended use, deployment context, users, outputs, and risk rationale. Epok turns that into an AI Act Project with evidence requirements.

02

Attach technical artifacts

Connect Model Versions, Dataset Cards, Evidence Logs, and project fields so documentation stays close to the actual lifecycle record.

03

Generate deterministic drafts

Draft documents come from templates and stored project evidence, keeping the record auditable rather than written from scratch by an LLM.
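A deterministic draft can be thought of as a fixed template filled from stored project fields. The sketch below is an assumption about the approach, not Epok's implementation; the template text and field names are invented for illustration.

```python
from string import Template

# Hypothetical sketch: a draft section is rendered from a fixed template
# and stored project fields, so the same evidence always yields the same
# text and every value traces back to a named field.

SECTION_TEMPLATE = Template(
    "Intended use: $intended_use\n"
    "Performance summary: $performance"
)

# Illustrative field values; in practice these would come from the
# project's captured evidence.
fields = {
    "intended_use": "early warning for ICU deterioration",
    "performance": "see Evidence Log metrics batch",
}

draft = SECTION_TEMPLATE.substitute(fields)
```

Rendering the same fields twice produces byte-identical output, which is what makes the resulting record auditable: a reviewer can diff a draft against its inputs instead of second-guessing free-form generation.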

04

Review and export

Reviewers see captured, generated, and review-required evidence before anything becomes an approval package.

Evidence graph

A product record that can explain itself.

Instead of asking teams to recreate context in a late compliance document, Epok links every draft section back to source objects. The graph is not decorative. It is how teams answer: where did this statement come from, who reviewed it, and what is still missing?

AI Act Project
Model Version
Dataset Card
Evidence Log
Generated Documents
captured / generated / review-required
Connected project evidence becomes draft documentation with explicit source and review state.
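One way to picture the graph answering those questions: each draft statement carries a pointer to its source object and a review state, so provenance is a lookup rather than an investigation. All statement text, source names, and the helper below are illustrative assumptions.

```python
# Hypothetical sketch: draft statements annotated with source object and
# review state (captured / generated / review-required).

statements = [
    {"text": "Intended use: ICU deterioration early warning",
     "source": "Model Version v2.4.1", "state": "captured"},
    {"text": "Collection protocol and consent basis documented",
     "source": "Dataset Card", "state": "generated"},
    {"text": "Subgroup results pending sign-off",
     "source": "Evidence Log", "state": "review-required"},
]

def open_blockers(stmts):
    """Statements a reviewer must resolve before export."""
    return [s for s in stmts if s["state"] == "review-required"]
```

With this shape, "where did this statement come from" is the `source` field, "who reviewed it" hangs off the review state, and "what is still missing" is exactly the list `open_blockers` returns.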

App cards

Screenshot-style surfaces for the evidence reviewers ask about.

Readiness

AI Act Project

27 fields

✓ Classification rationale
✓ Evidence source map
✓ Open review blockers

Registry

Model Version

v2.4.1

✓ Intended use
✓ Performance summary
✓ Training run link

Data governance

Dataset Card

Article 10

✓ Collection protocol
✓ Consent basis
✓ Bias and QC notes

Runtime

Evidence Log

2.4k events

✓ Metrics batch
✓ Drift checks
✓ Subgroup results