Enterprise Deployment

Governed AI at institutional scale.

For CISOs, CTOs, and compliance officers running autonomous systems that must produce cryptographic evidence for regulators, auditors, and counterparties.

Node Deployment (L3 / L4)

The deployment path

Every enterprise deployment follows the same four-stage progression. Each stage is independently valuable and independently auditable.

Stage 1: Shadow Mode (M1–M3)
  What you get: RTR operates in parallel with your existing controls. Observe, record, do not enforce. Accumulate the governance baseline without any operational impact.
  Regulatory value: First canonical leaves — the governance chain starts accumulating. Each day of depth is non-reconstructible by a later entrant.

Stage 2: Enforcement (M3–M6)
  What you get: RTR becomes the gate — no governed action proceeds without a passing gate result. The compliance report moves from "observation" to "enforcement" status.
  Regulatory value: L2 conformance certificate issued. RTR-C001 through RTR-C006 satisfied. A regulator can verify the governance record without accessing your systems.

Stage 3: Horizontal expansion (M6–M12)
  What you get: Extend governance to additional AI systems, decision types, and operational domains. Each addition adds to the shared governance chain — one audit trail, multiple systems.
  Regulatory value: A single compliance report spans multiple AI systems, simplifying the DORA, EU AI Act, and Basel III documentation burden.

Stage 4: Cross-institution mesh (M12+)
  What you get: Cross-witness with counterparty institutions. Settlement verification on shared actions. Insurance recognition based on governance chain depth and variance history. L4 conformance.
  Regulatory value: Network effect — the mesh becomes more valuable as participants increase. A counterparty's monitor can verify your governance record without you sending them a document.
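The shadow-to-enforcement transition can be pictured as a thin wrapper around the gate: the same gate runs in both stages, and only the handling of a failing result changes. This is an illustrative sketch, not the RTR API; the `govern` function, `GateResult` type, and mode names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GateResult:
    passed: bool
    reason: str

def govern(action: Callable[[], str],
           gate: Callable[[], GateResult],
           mode: str,
           log: list) -> Optional[str]:
    """Run `gate` before `action`. In shadow mode the result is only
    recorded; in enforcement mode a failing gate blocks the action."""
    result = gate()
    log.append((mode, result.passed, result.reason))  # governance leaf (sketch)
    if mode == "enforce" and not result.passed:
        return None  # no governed action proceeds without a passing gate
    return action()

log: list = []
# Shadow mode: a failing gate is recorded, but the action still runs.
out = govern(lambda: "done", lambda: GateResult(False, "policy X"), "shadow", log)
# Enforcement mode: the same failing gate now blocks the action.
blocked = govern(lambda: "done", lambda: GateResult(False, "policy X"), "enforce", log)
```

Either way the failure lands in the log, which is why shadow months still accumulate governance chain depth.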

What your regulator sees

A regulator who runs a monitor node on the transparency log sees the governance record without accessing your systems. The only information they need is your log_id: the SHA-256 of the log operator's SPKI (SubjectPublicKeyInfo), a permanent, unforgeable identifier derived from the cryptographic key itself.
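Deriving that identifier is a single hash over the DER-encoded SubjectPublicKeyInfo. A minimal sketch (the function name is ours; the placeholder bytes stand in for a real DER-encoded SPKI):

```python
import hashlib

def log_id_from_spki(spki_der: bytes) -> str:
    """log_id = SHA-256 over the log operator's SubjectPublicKeyInfo (DER).
    Because it is derived from the key itself, it cannot be forged or
    reassigned without changing the key."""
    return hashlib.sha256(spki_der).hexdigest()

# Placeholder bytes, not a real SPKI.
example = log_id_from_spki(b"\x30\x82 placeholder SPKI bytes")
```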

What they can verify (no access to your systems)

  • Every governance decision you recorded — leaf count, timestamps, subject IDs
  • Every STH — Merkle root, log signature, consistency with previous STH
  • Every inclusion proof — any specific leaf is in the log
  • Every variance record — governance gaps (BREACH / omega_breach)
  • Current lifecycle state — ACTIVE / QUARANTINE / LOCKBOX
  • Governance chain depth — days of accumulated record
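The inclusion and consistency checks above are standard transparency-log verification. A minimal inclusion-proof check in the RFC 6962/9162 style, assuming AGTS uses the same leaf/node domain-separation prefixes (an assumption; the AGTS spec may define its own hashing conventions):

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # 0x00 prefix: domain separation for leaves (RFC 6962 style)
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # 0x01 prefix: domain separation for interior nodes
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(leaf: bytes, index: int, tree_size: int,
                     path: list, root: bytes) -> bool:
    """Recompute the Merkle root from a leaf and its audit path
    (RFC 9162 algorithm) and compare against the signed root."""
    fn, sn = index, tree_size - 1
    r = leaf_hash(leaf)
    for p in path:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            r = node_hash(p, r)
            if fn % 2 == 0:
                while fn % 2 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
        else:
            r = node_hash(r, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root

# Tiny 4-leaf tree to exercise the check.
ha, hb, hc, hd = (leaf_hash(x) for x in [b"a", b"b", b"c", b"d"])
hab, hcd = node_hash(ha, hb), node_hash(hc, hd)
root = node_hash(hab, hcd)
# Audit path for leaf index 2 (b"c"): sibling hd, then uncle hab.
ok = verify_inclusion(b"c", 2, 4, [hd, hab], root)
```

This is the whole trust model in miniature: the monitor needs only the leaf, the path, and the signed root, never the underlying systems.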

What they cannot see (stays with you)

  • The content of your prompts, documents, or customer data
  • The raw evaluation datasets (only hashes)
  • The actual execution metrics (only commitments)
  • Your private keys or system architecture
  • Any data beyond what is committed in the governance record
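A hash commitment is what makes "only hashes" and "only commitments" work: the log stores a digest while the data stays with you, and a later disclosure can be checked against it. A sketch with hypothetical metric names; the random nonce blinds low-entropy values against brute-force guessing:

```python
import hashlib
import os

def commit(secret: bytes, nonce: bytes) -> str:
    """The log stores only sha256(nonce || secret). The secret stays
    with the institution; revealing (nonce, secret) later lets anyone
    check the pair against the published commitment."""
    return hashlib.sha256(nonce + secret).hexdigest()

metrics = b"latency_p99=41ms;error_rate=0.02"  # hypothetical execution metrics
nonce = os.urandom(16)
published = commit(metrics, nonce)             # all the regulator ever sees
# Verification after voluntary disclosure:
assert commit(metrics, nonce) == published
```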

EU AI Act regulatory sandbox positioning

If your national competent authority has established or is establishing a regulatory sandbox under EU AI Act Articles 57–63, AGTS provides the evidence layer that sandbox participants use to demonstrate governance during the sandbox period. We produce the evidence that sandboxes and regulators require.

Art. 57 §5 — Testing in conditions "as close as possible to real-world conditions"
  AGTS provision: AGTS governance runs in production from day one. Shadow mode means the governance is live; the enforcement is not yet. The log records are real.

Art. 58 §2(d) — The supervisory authority must "have access to relevant data and documentation"
  AGTS provision: The competent authority runs a monitor node. They have access to the governance record — independently verifiable — without any active data transfer from the participant.

Art. 59 §5 — Participants must "provide the supervisory authority with access to information on the results of testing" in real time
  AGTS provision: The monitor node reads the log continuously. Every governance decision — including gate failures — is visible as it happens. No batch reporting. No summarized data. The actual record.

Art. 60 §3 — "General purpose AI models with systemic risk": enhanced oversight
  AGTS provision: The AGTS closed loop (Triple-Leaf Ledger) records every authorization, execution, and variance for each governed action. Systemic risk signals are visible in the variance record in real time.

Art. 63 §5 — Exit from sandbox: evidence that the AI system "poses no more than acceptable risk"
  AGTS provision: The governance chain depth, variance classification history, and L3 conformance certificate together constitute evidence for Art. 63 §5. The record preceded the sandbox — it continues after it.

What AGTS is — and what it isn't

We are

  • Governance evidence infrastructure — we record what was authorized, with what evidence
  • An independently verifiable transparency log — any party can verify without trusting us
  • The compliance reporting layer — six claims, 17 sub-articles, machine-readable
  • The replay layer — any decision replayable from its canonical leaf
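Replay from a canonical leaf only works if serialization is deterministic: the same decision must always yield the same bytes, hence the same hash. A minimal sketch, assuming JSON canonicalization with sorted keys; the field names and the scheme are illustrative, not the AGTS leaf format:

```python
import hashlib
import json

def canonical_leaf(decision: dict) -> bytes:
    """Deterministic serialization (sorted keys, no whitespace) so the
    same decision record always produces identical bytes."""
    return json.dumps(decision, sort_keys=True, separators=(",", ":")).encode()

def replayable(recorded_hash: str, decision: dict) -> bool:
    """Replay check: re-derive the leaf from the decision record and
    compare its hash against what the log committed to."""
    return hashlib.sha256(canonical_leaf(decision)).hexdigest() == recorded_hash

decision = {"subject_id": "sys-7", "gate": "pass", "ts": 1710000000}
recorded = hashlib.sha256(canonical_leaf(decision)).hexdigest()
```

Any later tampering with the record, however small, changes the leaf hash and fails the replay.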

We are not

  • A legal compliance service — we produce governance evidence; your lawyers interpret it
  • A certification body — we issue conformance certificates; accreditation bodies certify
  • A replacement for ISO 26262 / IEC 62443 / FMEA — we govern AI decisions; functional safety engineering is yours
  • An AI model evaluator — we record your evaluation results; the evaluation is yours

Talk to us

Enterprise enquiries

For Certified (L2), Transparent (L3), and Networked (L4) deployment discussions:

enterprise@obligationsign.com →

Regulatory enquiries

For national competent authorities, regulatory sandbox pilots, or regulatory affairs discussions:

regulatory@obligationsign.com →

Patent licensing

For questions about EP 25 000 039.5 and EP25209644 licensing for AGTS-conforming products:

legal@obligationsign.com →

Sovereign Authority setup

Hardware key ceremony, GrapheneOS Pixel device provisioning, and Sovereign Authority configuration:

ops@obligationsign.com →
