Product flow
The AI Act project record, from first classification to reviewer handoff.
Epok is organized around the operational objects that model teams already use: systems, model versions, datasets, logs, required fields, and generated documents.
The core product idea is intentionally simple: regulatory evidence should be assembled from the same technical trail that produced the model. A model version should know which datasets, training runs, evaluation metrics, deployment assumptions, and review decisions support it.
Epok keeps those objects connected so a reviewer can see what was captured automatically, what was generated from deterministic templates, and what still requires human judgment.
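To make the objects model concrete, here is a minimal sketch of how those linked records might be typed. Nothing in this section specifies Epok's actual schema, so every name below (ModelVersion, DatasetCard, EvaluationRun, ReviewDecision) is an illustrative assumption, not the product's real interface.

```ts
// Hypothetical object model: names and shapes are illustrative assumptions.
interface DatasetCard {
  id: string;
  name: string;
  governanceNotes: string[]; // e.g. Article 10 data-governance evidence
}

interface EvaluationRun {
  id: string;
  metric: string;
  value: number;
}

interface ReviewDecision {
  reviewer: string;
  decision: "approved" | "changes_requested";
  timestamp: string; // ISO 8601
}

// A model version "knows" what supports it: each field is a link
// back into the technical trail rather than a copied-out summary.
interface ModelVersion {
  version: string; // e.g. "v2.4.1"
  datasets: DatasetCard[];
  evaluations: EvaluationRun[];
  deploymentAssumptions: string[];
  reviews: ReviewDecision[];
}
```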
Evidence package: ICU deterioration model
Hero mockup cards: Readiness (AI Act Project, 27 fields), Registry (Model Version v2.4.1), Data governance (Dataset Card, Article 10), Runtime (Evidence Log, 2.4k events).
Classify the AI system
Start with intended use, deployment context, users, outputs, and risk rationale. Epok turns that into an AI Act Project with evidence requirements.
Attach technical artifacts
Connect Model Versions, Dataset Cards, Evidence Logs, and project fields so documentation stays close to the actual lifecycle record.
Generate deterministic drafts
Draft documents are rendered from deterministic templates filled with stored project evidence, so the record stays auditable rather than written from scratch by an LLM (a minimal rendering sketch follows these steps).
Review and export
Reviewers see captured, generated, and review-required evidence before anything becomes an approval package.
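As a rough illustration of the deterministic-drafting step, the sketch below fills a fixed template only from stored evidence fields and surfaces gaps instead of inventing text. The EvidenceField type, the renderSection helper, and the status values are all hypothetical names chosen for this example; they are not Epok's real interface.

```ts
// Hypothetical types: illustrative only, not Epok's real schema.
type EvidenceStatus = "captured" | "generated" | "review_required";

interface EvidenceField {
  id: string;            // e.g. "intended_use"
  value: string | null;  // null until captured or reviewed
  status: EvidenceStatus;
  sourceRef?: string;    // link back to the object that produced the value
}

// Render one document section from a fixed template string.
// Placeholders like {{intended_use}} are filled only from stored fields;
// a missing field is surfaced as a gap instead of being invented.
function renderSection(template: string, fields: EvidenceField[]): string {
  const byId = new Map(fields.map((f) => [f.id, f] as const));
  return template.replace(/\{\{(\w+)\}\}/g, (_, id: string) => {
    const field = byId.get(id);
    return field?.value ?? `[MISSING: ${id} requires review]`;
  });
}

const draft = renderSection(
  "Intended use: {{intended_use}}. Training data: {{training_data}}.",
  [
    {
      id: "intended_use",
      value: "ICU deterioration early warning",
      status: "captured",
      sourceRef: "project/fields/intended_use",
    },
    { id: "training_data", value: null, status: "review_required" },
  ],
);
// => "Intended use: ICU deterioration early warning.
//     Training data: [MISSING: training_data requires review]."
```

The design point the sketch makes is the same one the step describes: the template decides what the document says, and the stored record decides what fills it in, so a reviewer can trace every sentence either to captured evidence or to an explicit gap.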
Evidence graph
A product record that can explain itself.
Instead of asking teams to reconstruct context in a compliance document written late in the process, Epok links every draft section back to its source objects. The graph is not decorative. It is how teams answer: where did this statement come from, who reviewed it, and what is still missing?
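One way to read "the graph is not decorative": if each draft statement carries edges back to its source objects and reviewers, those three questions become simple lookups. The sketch below illustrates that idea under assumed names (SourceObject, Statement, openQuestions); it is not Epok's implementation.

```ts
// Hypothetical evidence graph: a statement node points at the
// objects it was derived from and the reviewers who signed off.
interface SourceObject {
  id: string;
  kind: "model_version" | "dataset_card" | "evidence_log" | "project_field";
}

interface Statement {
  text: string;
  derivedFrom: SourceObject[]; // "where did this come from?"
  reviewedBy: string[];        // "who reviewed it?"
}

// A statement is still "missing" evidence when it has no linked
// sources or no reviewer sign-off: "what is still missing?"
function openQuestions(s: Statement): string[] {
  const gaps: string[] = [];
  if (s.derivedFrom.length === 0) gaps.push("no source object linked");
  if (s.reviewedBy.length === 0) gaps.push("no reviewer sign-off");
  return gaps;
}
```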
App cards
Screenshot-style surfaces for the evidence reviewers ask about.
Readiness: AI Act Project (27 fields)
Registry: Model Version v2.4.1
Data governance: Dataset Card (Article 10)
Runtime: Evidence Log (2.4k events)