Beta Governed AI Execution

Let AI take
real actions.

With control, review, and proof.

Every decision is evaluated through five governance gates, optionally approved by humans, and cryptographically recorded in a Merkle transparency tree. No infrastructure to operate — governance handled by the protocol.

OpenClaw Starter is not a chatbot. It is a governed execution system for AI agents.

01 — What You Can Do

Real scenarios.
Real governance.

Deploy Safely

Agent evaluates deployment risk. Low risk deploys automatically. High risk requires human approval. Dangerous changes are blocked.

ADMIT → deploy automatically
REVIEW → require approval
REFUSE → block deployment
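As a sketch, the risk-tiered decision above can be expressed as a single mapping from a risk score to a verdict. The function name and thresholds here are illustrative assumptions, not the protocol's actual gate values.

```typescript
// Hypothetical thresholds for illustration only; the real decision
// comes from the G1–G5 gate evaluation described later.
type Verdict = "ADMIT" | "REVIEW" | "REFUSE";

function deployVerdict(riskScore: number): Verdict {
  if (riskScore < 0.3) return "ADMIT";  // low risk: deploy automatically
  if (riskScore < 0.7) return "REVIEW"; // elevated risk: require human approval
  return "REFUSE";                      // dangerous change: block deployment
}
```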
Rotate Secrets

Agent proposes key rotation. Governance requires approval. Execution is logged with cryptographic proof. Full audit trail.

1. Agent proposes rotation
2. G5 requires human sign-off
3. Execution recorded with proof
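The three-step flow above can be sketched as a small state machine. The state names, record shape, and hash construction are assumptions for illustration; the real proof is the protocol's Merkle anchor, not a bare SHA-256.

```typescript
import { createHash } from "node:crypto";

// Illustrative rotation lifecycle: proposed -> awaiting G5 sign-off ->
// executed, with a hash recorded as a stand-in for the real proof anchor.
type RotationState = "awaiting_approval" | "executed" | "denied";

interface RotationRecord {
  keyId: string;
  state: RotationState;
  approvedBy?: string; // human signer (G5)
  proofHash?: string;  // stand-in for the Merkle-anchored proof
}

function propose(keyId: string): RotationRecord {
  // G5 requires human sign-off, so a proposal always awaits approval.
  return { keyId, state: "awaiting_approval" };
}

function approve(record: RotationRecord, signer: string): RotationRecord {
  if (record.state !== "awaiting_approval") throw new Error("nothing to approve");
  const proofHash = createHash("sha256")
    .update(`${record.keyId}:${signer}`)
    .digest("hex");
  return { ...record, state: "executed", approvedBy: signer, proofHash };
}
```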
Automate Compliance

A scheduled agent scans your systems on a cadence and alerts only when governance detects risk. Every scan decision is provable and replayable.

Scheduled scans via SchedulerDO
Risk-tiered governance per action
Merkle-anchored audit trail
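A minimal sketch of the cadence-and-alert policy above, assuming a simple scan result shape (the SchedulerDO interface itself is not shown, and these field names are assumptions):

```typescript
// Illustrative cadence + alert policy; field names are assumptions.
interface ScanResult {
  scannedAt: number;      // epoch ms
  riskDetected: boolean;  // set by the governance evaluation
}

// Alert only when governance detects risk; stay quiet otherwise.
function shouldAlert(result: ScanResult): boolean {
  return result.riskDetected;
}

// Next scan time on a fixed cadence.
function nextScanAt(result: ScanResult, cadenceMs: number): number {
  return result.scannedAt + cadenceMs;
}
```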

02 — The Agent Loop

Every action.
Governed end-to-end.

1. Intent: user or schedule
2. Agent: LLM reasoning
3. Tool: proposed action
4. Governance: G1–G5 evaluation
5. Review: human approval, if required
6. Execution: action runs
7. Proof: Merkle anchor
ADMIT

Action passes all gates. Executes automatically. Proof recorded.

REVIEW

Action requires human approval. Agent suspends. Resumes on approve or terminates on deny.

REFUSE

Action blocked by governance. Never executes. Refusal recorded with proof chain.
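The loop's three outcomes can be sketched as a single dispatch. The function names and the approval callback are illustrative assumptions; note that a proof is recorded for every verdict, including refusals.

```typescript
type Verdict = "ADMIT" | "REVIEW" | "REFUSE";

interface Outcome {
  executed: boolean;
  proofRecorded: boolean; // every verdict leaves a proof, even refusals
}

// Illustrative dispatch over the three governance verdicts.
function runAction(verdict: Verdict, humanApproves: () => boolean): Outcome {
  switch (verdict) {
    case "ADMIT":
      return { executed: true, proofRecorded: true };  // passes all gates
    case "REVIEW":
      // Agent suspends; resumes on approve, terminates on deny.
      return { executed: humanApproves(), proofRecorded: true };
    case "REFUSE":
      return { executed: false, proofRecorded: true }; // refusal is still proven
  }
}
```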

03 — Five-Gate Governance

Five gates.
Every action.

G1 · Statistical Confidence (H ≥ 0.40): Is the model confident enough to act?
G2 · Causal Attribution (C ≥ 0.40): Can the action be traced to a cause?
G3 · Regression Safety (E ≤ 0.60): Will this action make things worse?
G4 · Evidence Integrity (SHA-256): Is the evidence chain intact?
G5 · Human Authorization (Ed25519): Does a human need to approve?

All tiers use identical five-gate governance. The same cryptographic proof chain. The same transparency log.
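Under the published thresholds, a pass through the gates can be sketched as simple comparisons. G4 and G5 are integrity and signature checks, stubbed here as booleans; the input shape is an assumption for illustration.

```typescript
// Illustrative five-gate check using the published thresholds.
// evidenceIntact stands in for the SHA-256 chain check (G4) and
// humanSigned for the Ed25519 authorization check (G5).
interface GateInput {
  confidence: number;     // H, gate G1
  attribution: number;    // C, gate G2
  regressionRisk: number; // E, gate G3
  evidenceIntact: boolean;
  humanSigned: boolean;
}

function passesGates(input: GateInput): boolean {
  return (
    input.confidence >= 0.4 &&     // G1: H >= 0.40
    input.attribution >= 0.4 &&    // G2: C >= 0.40
    input.regressionRisk <= 0.6 && // G3: E <= 0.60
    input.evidenceIntact &&        // G4: evidence chain intact
    input.humanSigned              // G5: human authorization when required
  );
}
```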

04 — Why This Matters

AI can act.
But should it?

Without governance

AI agents act unpredictably. No audit trail. No review step. No proof that a decision was safe when it was made.

× Actions execute without evaluation
× No way to require human approval
× No cryptographic record of what happened
With OpenClaw

Every AI action is controlled, reviewable, and provable. You decide what runs automatically, what needs approval, and what is blocked.

Every action evaluated through G1–G5
Human-in-the-loop when governance requires it
Merkle-anchored proof for every decision
vs Chatbots

Chatbots generate text. OpenClaw governs real execution with proof chains.

vs Agent Frameworks

Frameworks let agents act. OpenClaw ensures every action is evaluated before it runs.

vs Guardrails

Guardrails filter output. OpenClaw governs the execution itself with cryptographic accountability.

05 — Three-Tier LLM Routing

Choose your model.
Keep your governance.

Free Tier

Workers AI

Llama 3.1 8B inference at the edge, included free. No API keys needed. Every response governed through the five-gate pipeline.

@cf/meta/llama-3.1-8b-instruct
BYOK

AI Gateway

Bring your own API keys for OpenAI, Anthropic, or Google. Routed through Cloudflare AI Gateway with sovereign headers for governance inspection.

AES-GCM encrypted key vault
Enterprise

Managed

ObligationSign-provisioned keys with premium model access. Full SLA, dedicated support, unlimited governance events.

Contact sales for pricing

06 — Sovereign Headers

Every API call
carries its proof.

When OpenClaw Starter routes through the AI Gateway, every outbound request includes sovereign headers — the governance commitment hash, agent ID, and governance mode. This means the AI provider receives cryptographic context proving the request was governed before it was sent.

The same headers are used for BYOK and Managed tiers. The governance record is anchored before the LLM call is made.

Sovereign Headers
X-AGTS-Commitment-Hash: sha256:abc...
X-AGTS-Sovereign-Header: True
X-AGTS-Agent-Id: agent-xyz
X-AGTS-Governance-Mode: transparent
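A sketch of constructing the sovereign headers shown above for an outbound request, assuming the commitment hash is a SHA-256 over the governed payload computed before the LLM call (the helper name and hashing input are illustrative):

```typescript
import { createHash } from "node:crypto";

// Illustrative construction of the sovereign headers shown above.
// What exactly is hashed into the commitment is an assumption here.
function sovereignHeaders(agentId: string, payload: string): Record<string, string> {
  const commitment = createHash("sha256").update(payload).digest("hex");
  return {
    "X-AGTS-Commitment-Hash": `sha256:${commitment}`,
    "X-AGTS-Sovereign-Header": "True",
    "X-AGTS-Agent-Id": agentId,
    "X-AGTS-Governance-Mode": "transparent",
  };
}
```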

07 — Edge Architecture

100% edge.
Zero servers.

Workers: API + routing
Durable Objects: agent + chat state
KV + R2: keys, files, vault
AI Gateway: LLM routing

Starter: 500 governed events / month. Bundled; no per-event billing.
Pro: usage-based, pay per governance event. Risk-tiered pricing per action.
Enterprise: unlimited events + SLA. Volume commitment; dedicated support.

08 — Quick Start

Three steps.
Governed AI.

01

Create an Agent

POST /v1/agents
{ "name": "my-agent" }
02

Connect via WebSocket

GET /v1/agents/:id/chat
Upgrade: websocket
03

Verify Governance

GET /v1/agents/:id/proofs/:hash
→ { leaf_hash, verdict }
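The three calls above can be wrapped in a tiny client. The base URL is a placeholder assumption; the endpoint paths and response shape are as shown above.

```typescript
// Illustrative client for the three quick-start endpoints.
// BASE is a placeholder; substitute your deployment's URL.
const BASE = "https://example.invalid";

const routes = {
  createAgent: () => `${BASE}/v1/agents`,               // POST
  chat: (id: string) => `${BASE}/v1/agents/${id}/chat`, // GET + Upgrade: websocket
  proof: (id: string, hash: string) =>
    `${BASE}/v1/agents/${id}/proofs/${hash}`,           // GET
};

// Step 01: create an agent, returning its id.
async function createAgent(name: string): Promise<{ id: string }> {
  const res = await fetch(routes.createAgent(), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name }),
  });
  return res.json() as Promise<{ id: string }>;
}
```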

09 — Get Started

Make AI safe to act in the real world.

Every action controlled. Every decision reviewable. Every outcome provable.