Protocol v1.2 · March 2026

The open protocol for
AI-accountable robotics

RCAN gives every robot a unique, verifiable identity — and makes every AI decision auditable, signed, and provable.

The problem

When a robot causes an incident — a warehouse arm injures a worker, a delivery robot makes the wrong call — investigators hit a wall. Not because the robot failed, but because no one can prove what it did.

Which command arrived? Who authorized it? What was the AI model's confidence? Was there a human in the loop? The answers are usually scattered across proprietary logs, local storage, and undocumented formats — or simply missing.

RCAN is the protocol layer that makes those answers available and forensically defensible — before the incident happens.

Key concepts

🤖

Robot Addressing

§2

Every robot gets a globally unique URI: rcan://registry.rcan.dev/manufacturer/model/version/device-id. Like a domain name, but for physical machines. Persistent, portable, and human-readable.
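An RCAN URI decomposes cleanly with a standard URL parser. The sketch below is illustrative, not the normative grammar (that lives in §2): the function name `parse_rcan_uri` and the returned field names are our own, and the `acme/arm-7/v2/0f3c9a` address is a made-up example.

```python
from urllib.parse import urlparse

def parse_rcan_uri(uri: str) -> dict:
    """Split an RCAN URI into its addressing components (illustrative only)."""
    parts = urlparse(uri)
    if parts.scheme != "rcan":
        raise ValueError(f"not an RCAN URI: {uri!r}")
    segments = parts.path.strip("/").split("/")
    if len(segments) != 4:
        raise ValueError("expected manufacturer/model/version/device-id")
    manufacturer, model, version, device_id = segments
    return {
        "registry": parts.netloc,       # e.g. registry.rcan.dev
        "manufacturer": manufacturer,
        "model": model,
        "version": version,
        "device_id": device_id,
    }

record = parse_rcan_uri("rcan://registry.rcan.dev/acme/arm-7/v2/0f3c9a")
print(record["registry"], record["device_id"])
```

Because the address is hierarchical, a verifier can resolve the registry host first and then walk down to the device record, much like DNS resolution.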

🔗

Commitment Chain

§7

Every outbound action is appended to an HMAC-SHA256 chained audit log. Tamper with any record, and the chain breaks. Every record is verifiable without a central server.
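The chaining idea can be sketched in a few lines of standard-library Python. This is a minimal model, not the spec's wire format: record layout, the `"genesis"` seed value, and key handling are assumptions made for illustration.

```python
import hashlib
import hmac
import json

def append_record(chain: list, key: bytes, action: dict) -> None:
    """Append an action, folding the previous record's MAC into the new one."""
    prev_mac = chain[-1]["mac"] if chain else "genesis"
    payload = json.dumps(action, sort_keys=True) + prev_mac
    mac = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    chain.append({"action": action, "mac": mac})

def verify_chain(chain: list, key: bytes) -> bool:
    """Recompute every MAC; tampering with one record breaks all later links."""
    prev_mac = "genesis"
    for rec in chain:
        payload = json.dumps(rec["action"], sort_keys=True) + prev_mac
        expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, rec["mac"]):
            return False
        prev_mac = rec["mac"]
    return True

key = b"device-audit-key"
chain: list = []
append_record(chain, key, {"op": "move", "target": "bay-3"})
append_record(chain, key, {"op": "grip", "force": 12})
print(verify_chain(chain, key))  # valid chain
```

Because each MAC covers the previous MAC, an auditor holding the key can check the whole log offline — no central server needs to be consulted.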

🛡️

AI Accountability Layer

§16

Confidence gates block actions below a threshold. Human-in-the-loop gates require token-based approval. Model identity is recorded with every decision — so you know which model made the call.

🔐

Message Signing

§9

Ed25519 keypairs sign every command at the source. The signature travels with the message, binding it to a specific key ID. Keys are registered in the robot's record.
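Signing and verifying with Ed25519 looks roughly like the following, here using the widely available `cryptography` package rather than the RCAN SDKs. The command payload and the in-process key generation are illustrative; in practice the private key is provisioned on the device and the verifier fetches the public key via the key ID in the robot's record.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Provisioning step (normally done once, on the device).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The command is signed at the source; the signature travels with it.
command = b'{"rrn": "rcan://registry.rcan.dev/acme/arm-7/v2/0f3c9a", "op": "move"}'
signature = private_key.sign(command)  # 64-byte Ed25519 signature

# The receiver verifies against the registered public key.
try:
    public_key.verify(signature, command)
    print("signature valid")
except InvalidSignature:
    print("signature rejected")
```

Any change to the message bytes in transit invalidates the signature, which is what binds a logged command to a specific key ID.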

Who maintains it

RCAN is designed and maintained by Craig Merry, with the goal of eventually transferring governance to an independent Robot Registry Foundation.

The spec is licensed CC BY 4.0 — free to implement, fork, and build on. The reference SDKs (rcan-py, rcan-ts) and the OpenCastor robot runtime are MIT licensed.

Standards engagement is underway with ISO/TC 299 WG3 (industrial robot safety) and EU harmonized standards bodies ahead of the August 2026 EU AI Act high-risk provisions deadline.

Roadmap

v1.2 Current

AI Accountability Layer (§16): confidence gates, HiTL gates, model identity, thought log. Ed25519 signing. Commitment chain.

v1.3 Next

Federated registry protocol — multiple registries that can cross-verify RRNs. Robot-to-robot authentication.

v2.0 Vision

Signed firmware manifests, supply chain attestation, ISO/TC 299 liaison, EU AI Act §16 reference implementation.

Get started in 5 minutes

Install the SDK, run your first RCAN message, and register your robot with a global RRN.