Start your governance chain
Sign in securely via Cloudflare Access. Your email is verified before a trial tenant is provisioned — your cryptographic governance identity is generated automatically.
Verified email login via one-time PIN. No password required. No credit card.
| | Dashboard | REST API | JavaScript SDK |
|---|---|---|---|
| Sign up | Click above — email or Google | POST /auto-register | POST /auto-register |
| First leaf | Automatic (60 sec) | POST /governance/evidence | agts.evidence({...}) |
| Verify | Click leaf hash in dashboard | GET /log/proof?leaf_hash=... | agts.verify(leafHash) |
| Compliance report | Compliance tab → Export | GET /governance/report/:hash | agts.report() |
| Time to first leaf | ~60 seconds | ~5 minutes | ~10 minutes |
| Code required | None | Four curl commands | ~15 lines JavaScript |
| Best for | Evaluation, compliance officers | Backend integration, CI/CD | Embedded real-time governance |
Register your account
~30 seconds. Send your email address. No other information is needed. The API instantly creates a trial tenant — a self-contained governance identity with its own API key and node ID. You will get back everything you need to start submitting governance evidence.
| Field | Required | What to put here |
|---|---|---|
| email | required | Your email address. Used only to send your magic link. Not stored for marketing. |
What you get back
| Field | What it is |
|---|---|
| tenant_id | Your organisation's identifier on the clearinghouse — prefix tn_ |
| api_key | Bearer token for all subsequent API calls — prefix agts_live_ — keep this private |
| node_id | Your governance node's cryptographic identity — a 64-char hex SHA-256 of your SPKI |
| log_url | The transparency log endpoint where your leaves will be admitted |
| dashboard | Direct link to your clearinghouse dashboard with tenant context pre-loaded |
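If you are registering from a script rather than the form, the call is a single POST. This is a sketch only: the endpoint path and response field names come from this page, but the request body shape (`{ email }`) is an assumption — check the API reference before relying on it.

```javascript
// Sketch: register a trial tenant via POST /auto-register.
// The JSON body shape ({ email }) is an assumption; response fields are from this page.
// fetchFn defaults to the global fetch (Node 18+); pass a stub for testing.
async function register(baseUrl, email, fetchFn = fetch) {
  const res = await fetchFn(`${baseUrl}/auto-register`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email }),
  });
  if (!res.ok) throw new Error(`registration failed: ${res.status}`);
  // Expected fields per this page: tenant_id, api_key, node_id, log_url, dashboard
  return res.json();
}
```

Save the returned api_key somewhere private — every later call authenticates with it.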
Submit your first governance evidence
~2 minutes. This is the core protocol call. You are telling the clearinghouse: "I evaluated this AI system, here is the evidence across five governance gates, and I authorise this action." The clearinghouse assembles a Proof Bundle, runs the validator quorum, wraps it in a Governance Envelope, and admits the result as a canonical leaf in the transparency log.
You need to provide a result for each of the five gates (G1–G5) and four cryptographic hashes that commit to your evaluation artefacts. The gate evidence table below explains each field.
| Field | Required | What to put here |
|---|---|---|
| Top-level | | |
| subject_id | required | A unique string identifying the AI system and version being governed. Use a format like system-name:component:version. This appears in your compliance report and replay walkthrough. |
| G1 — Semantic Validity (H ≥ 0.40) | | |
| result | required | PASS or FAIL |
| confidence_interval_lower | required | Lower bound of your performance confidence interval. A number between 0 and 1. Example: 0.80 |
| confidence_interval_upper | required | Upper bound. Must be ≥ lower bound. Example: 0.95 |
| G2 — Financial Validity (C ≥ 0.40) | | |
| result | required | PASS or FAIL |
| causal_attribution | required | true if you have evidence the improvement is attributable to your change (A/B test, ablation). false if correlation only. |
| G3 — Operational Validity (E ≤ 0.60) | | |
| result | required | PASS or FAIL |
| protected_metrics | required | Object of metric names and their current values. Example: {"toxicity_score": 0.02, "refusal_rate": 0.98}. Empty object {} is accepted for initial trials. |
| G4 — Policy Admission | | |
| result | required | PASS or FAIL |
| evidence_class | required | How your evaluation was produced: INSTRUMENTED (internal audit log — use this for trials), HOOKED (external harness), or ATTESTED (secure enclave). |
| G5 — Cryptographic Finalization | | |
| result | required | PASS or FAIL |
| operator_id | required | The person or system role authorising this action. Your email address is fine. This is recorded in the canonical leaf and appears in the compliance report. |
| Evidence hashes — four SHA-256 commitments to your evaluation artefacts | | |
| dataset_provenance_hash | required | SHA-256 of your evaluation dataset, test suite, or prompt template. Run sha256sum your-eval-dataset.jsonl in your terminal. This proves which data you evaluated against. |
| evaluation_trace_hash | required | SHA-256 of your full evaluation output log. Run sha256sum your-eval-results.log. This proves what results you saw. |
| ablation_execution_log_hash | required | SHA-256 of your A/B test or ablation log. If you have no ablation study yet, hash your safety test output file. |
| capability_certificate_hash | required | SHA-256 of your model card, benchmark result, or capability specification document. This commits to the model version being governed. |
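Putting the table together, the evidence body might be assembled as follows. The field names are the ones documented above, but the grouping of gate fields under g1–g5 keys is an assumption for illustration — confirm the exact envelope shape against the API reference.

```javascript
// Illustrative evidence body for POST /governance/evidence.
// Field names follow the table above; nesting under g1..g5 is an assumption.
const evidence = {
  subject_id: 'support-bot:answer-generator:v2.1',
  g1: { result: 'PASS', confidence_interval_lower: 0.80, confidence_interval_upper: 0.95 },
  g2: { result: 'PASS', causal_attribution: true },
  g3: { result: 'PASS', protected_metrics: { toxicity_score: 0.02, refusal_rate: 0.98 } },
  g4: { result: 'PASS', evidence_class: 'INSTRUMENTED' },
  g5: { result: 'PASS', operator_id: 'ops@example.com' },
  // Four SHA-256 commitments — replace the placeholders with real 64-char hex digests.
  dataset_provenance_hash: '<64-char hex>',
  evaluation_trace_hash: '<64-char hex>',
  ablation_execution_log_hash: '<64-char hex>',
  capability_certificate_hash: '<64-char hex>',
};
const body = JSON.stringify(evidence);
```

Send `body` as the JSON payload with your `Bearer agts_live_...` key in the Authorization header.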
Generating the hashes on the command line
On macOS: shasum -a 256 your-file.txt
On Linux: sha256sum your-file.txt
The hash is the first 64 characters of the output. These are commitments — you keep the originals. The hashes prove the artefacts existed at governance time.
What you get back
| Field | What it is |
|---|---|
| leaf_index | Position of this leaf in the transparency log (integer, starts at 0) |
| leaf_hash | The canonical leaf hash — save this; you use it to verify and replay this decision |
| artifact_hash | Hash of the full governance envelope — use this to retrieve your compliance report |
| sth.tree_size | Current size of the Merkle tree after your leaf was admitted |
| sth.root_hash | Current Merkle root — any monitor can verify this independently |
| sth.log_signature | Log operator's ECDSA signature over the tree head — cryptographic proof of admission |
Verify your leaf is in the log
~30 seconds. Use the leaf_hash you received in Step 2 to request a Merkle inclusion proof from the transparency log. The proof is a list of sibling hashes you recompute locally — no trust in ObligationSign is required. You can also paste the leaf hash into the browser verify page and the proof is computed in your browser with WebCrypto.
| Field | Required | What to put here |
|---|---|---|
| leaf_hash | required | The leaf_hash string from the Step 2 response. 64-character hex string. |
What you get back
| Field | What it is |
|---|---|
| audit_path | Array of sibling hashes from your leaf to the Merkle root — recompute locally to verify |
| tree_size | The tree size the proof is valid for |
| root_hash | The Merkle root — must match the STH you received in Step 2 |
Retrieve your compliance report
~30 seconds. Use the artifact_hash from Step 2 to retrieve your AGTS_COMPLIANCE_REPORT_V1. The report maps your governance record to six regulatory claims (covering EU AI Act, DORA, Basel III, and ISO 42001). On the free tier the report is generated and viewable — export to JSON or Markdown requires L2.
| Field | Required | What to put here |
|---|---|---|
| artifact_hash | required | The artifact_hash string from the Step 2 response. This is in the URL path, not a query parameter. |
| Authorization | required | Your API key from Step 1, formatted as Bearer agts_live_... |
What you get back
| Field | What it is |
|---|---|
| claims[0..5] | Six regulatory compliance claims, each with status, mapped articles, and the gate evidence that satisfies it |
| leaf_hash | Back-reference to the canonical leaf this report covers |
| generated_at | ISO timestamp of report generation |
| conformance_level | L1 on trial — upgrades to L2+ with paid tier |
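In script form the retrieval is one authenticated GET. Note that the artifact hash goes in the URL path and the API key in the Authorization header; the endpoint path is taken from this page, while the error handling here is just an illustrative sketch.

```javascript
// Sketch: fetch the compliance report for an artifact_hash.
// GET /governance/report/:hash with a Bearer token, per this page.
async function fetchReport(baseUrl, artifactHash, apiKey, fetchFn = fetch) {
  const res = await fetchFn(`${baseUrl}/governance/report/${artifactHash}`, {
    headers: { Authorization: `Bearer ${apiKey}` }, // agts_live_... key from Step 1
  });
  if (!res.ok) throw new Error(`report fetch failed: ${res.status}`);
  return res.json(); // { claims, leaf_hash, generated_at, conformance_level }
}
```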
Register your account and get your API key
~30 seconds. Same as the REST API path. Send your email address to get a tenant_id, api_key, and node_id. You can do this via the sign-up form at the top of this page, or via curl.
Add the SDK to your page
~2 minutes. Drop a single script tag into your HTML page. No npm, no build step, no dependencies. The SDK loads the protocol client and exposes an AGTS.Client constructor globally.
| Config field | Required | What to put here |
|---|---|---|
| apiKey | required | Your API key from Step 1. Prefix agts_live_... |
| subjectId | required | A string identifying the AI system being governed. Use the same format as the REST API: system-name:component:version |
Wrap your AI inference call with governance evidence
~5 minutes. Before calling your AI model, collect the five gate results and call agts.evidence(). This assembles a Proof Bundle, submits it to the validator quorum, and admits the result as a canonical leaf — all before your inference call returns. The leaf hash is returned so you can attach it to the response for downstream verification.
The gate fields are identical to the REST API. See the gate evidence table below for field-by-field explanations.
What evidence() returns
| Field | What it is |
|---|---|
| leaf_hash | Canonical leaf hash — include in your API response for clients to verify |
| compliance_url | Direct URL to the compliance report for this governance decision |
| leaf_index | Position in the transparency log |
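A minimal wrapper might look like the sketch below. The evidence() method and its field names follow this page's gate table, but the exact SDK signature, the g1–g5 nesting, and the omission of the four evidence hashes (elided here for brevity) are assumptions — adapt to the real client.

```javascript
// Sketch: submit governance evidence, then run inference.
// `client` is assumed to be an AGTS.Client-like object exposing evidence().
async function governedInference(client, subjectId, runInference) {
  const { leaf_hash, compliance_url } = await client.evidence({
    subject_id: subjectId,
    g1: { result: 'PASS', confidence_interval_lower: 0.8, confidence_interval_upper: 0.95 },
    g2: { result: 'PASS', causal_attribution: true },
    g3: { result: 'PASS', protected_metrics: {} }, // empty object accepted on trials
    g4: { result: 'PASS', evidence_class: 'INSTRUMENTED' },
    g5: { result: 'PASS', operator_id: 'ops@example.com' },
    // Evidence hashes omitted for brevity — see the evidence hash table.
  });
  const output = await runInference();
  // Attach the leaf hash so downstream clients can verify the decision.
  return { output, leaf_hash, compliance_url };
}
```

The pattern to note is ordering: the leaf is admitted before the inference result is returned, so every response your API emits can carry a verifiable governance reference.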
Every governance submission — REST API or SDK — requires a result for all five gates. This table explains each field.
| Gate | What it represents | Minimum fields | How to satisfy it |
|---|---|---|---|
| G1 Semantic Validity | H (entropy) ≥ 0.40 — semantic diversity firewall against monoculture collapse | result, confidence_interval_lower, confidence_interval_upper | Run your evaluation suite. Bootstrap or cross-validate. Report the CI bounds (0–1 range). |
| G2 Financial Validity | C (compliance) ≥ 0.40 — financial and regulatory compliance threshold | result, causal_attribution (bool) | A/B test, ablation study, or before/after comparison that isolates the change. |
| G3 Operational Validity | E (energy) ≤ 0.60 — bounds execution entropy, ensures no protected metric regresses | result, protected_metrics (object) | Run your regression test suite. Report the value for each metric you protect (e.g. toxicity, refusal rate). |
| G4 Policy Admission | Classification of how your evaluation evidence was produced and its trustworthiness | result, evidence_class | INSTRUMENTED = internal audit log (use for trials). HOOKED = external harness. ATTESTED = secure enclave. |
| G5 Cryptographic Finalization | Ed25519 signature and Merkle inclusion proof commit the decision to the append-only log | result, operator_id | The individual or system role that approved the deployment or action. Recorded immutably in the canonical leaf. |
These SHA-256 hashes commit to your evaluation artefacts at governance time. You keep the originals. If the decision is ever challenged, you produce the originals and anyone can recompute the hashes to verify they match the canonical leaf.
| Hash field | Commits to | How to generate |
|---|---|---|
| dataset_provenance_hash | Your evaluation dataset, test suite, or task distribution | sha256sum eval-dataset.jsonl |
| evaluation_trace_hash | The full evaluation output log — what your system produced during testing | sha256sum eval-output.log |
| ablation_execution_log_hash | Your ablation study or A/B test execution record | sha256sum ablation-log.json |
| capability_certificate_hash | Your model card, benchmark result, or capability specification | sha256sum model-card.pdf |
For a chatbot — simplest valid case
Hash your prompt template file, your safety test output file (run twice for the ablation if you have no A/B test), and your model's README or spec sheet. These are four file hashes — one command each. The hashes prove those artefacts existed at governance time without uploading the files anywhere.
| Capability | Free (L1) | Requires paid tier |
|---|---|---|
| Proof bundle generation (ECDSA P-256) | ✓ | — |
| Canonical leaves in shared log | ✓ | — |
| Merkle inclusion proof | ✓ | — |
| Compliance report (6 claims, 17 sub-articles) | ✓ view only | Export (JSON/MD) requires L2 |
| REST API + webhooks | ✓ | — |
| JavaScript SDK | ✓ | — |
| Replay (gate-by-gate walkthrough) | ✓ own leaves | — |
| External validator quorum (3-of-4) | ✗ | L2+ |
| Sovereign Authority signature | ✗ | L2+ |
| Closed-loop (triple-leaf) | ✗ | L2+ |
| Witness countersignature on STH | ✗ | L4 |
| Insurance recognition | ✗ | L3/L4 |