Regulatory Coverage
Framework Coverage Index
RCAN protocol provisions mapped to applicable regulatory frameworks for physical robot AI systems. Coverage classifications refer to protocol-layer technical controls only. Organizational, procedural, and regulatory obligations remain the responsibility of the provider and deployer.
EU AI Act (2024/1689)
Applies to high-risk AI systems under Annex III, Category 3(a): safety components of machinery. Application date: 2 August 2026.
| Article | Requirement | RCAN Provisions | Coverage |
|---|---|---|---|
| Art. 9 | Risk management system: identify and mitigate known and foreseeable risks across the system lifecycle | §16.2 confidence gates (per-scope thresholds); §7 ConfidenceGate; castor fria generate FRIA artifact (OpenCastor#858) | Substantial |
| Art. 12 | Record keeping: automatic logging of operational events enabling post-deployment reconstruction | §6 AuditChain (HMAC-SHA256 append-only, chained); §16.1 AI block (model identity, confidence, latency, thought_id); QuantumLink-Sim commitment chain | Full (technical) |
| Art. 13 | Transparency: deployers must be able to interpret outputs and understand system limitations | §16.4 thought log (GET /api/thoughts/<id>, OWNER-gated); robot-memory.md structured operational history (rcan-spec#191) | Substantial |
| Art. 14 | Human oversight: effective oversight during operation; ability to intervene, override, or halt | §16.3 HiTL gates (structural PENDING_AUTH → AUTHORIZE flow; cannot be bypassed by the AI agent); §2 RBAC OWNER role enforcement; ESTOP protocol | Full (technical) |
| Art. 17 | Quality management: documented methodology, testing, performance monitoring, change management | §16.2 confidence gate thresholds (performance floor); §16.1 inference_latency_ms in every audit record; robot-memory.md confidence decay (systematic degradation monitoring) | Partial |
| Art. 26 | Deployer obligations: use the system as instructed, maintain human oversight, report incidents | §2 RBAC LEASEE role (deployer authority boundary enforced at the protocol layer; scope violations structurally impossible) | Partial |
| Art. 50 | AI-generated content marking: AI-generated outputs must be machine-detectable as AI-generated | §16.5 AI output watermarking: HMAC watermark token on every AI-generated COMMAND message; verification endpoint (rcan-spec#194, in progress) | In progress |
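The Art. 12 row above rests on §6 AuditChain's append-only, HMAC-chained records. A minimal sketch of the chaining idea, assuming illustrative record fields and class names (this is not the RCAN wire format):

```python
import hashlib
import hmac
import json

class AuditChain:
    """Append-only log where each record's MAC covers the previous MAC."""

    def __init__(self, key: bytes):
        self._key = key
        self._records = []
        self._prev_mac = b"\x00" * 32  # genesis link

    def append(self, record: dict) -> str:
        # Chaining: the MAC input includes the previous MAC, so altering
        # or deleting any earlier record invalidates every later MAC.
        payload = json.dumps(record, sort_keys=True).encode()
        mac = hmac.new(self._key, self._prev_mac + payload, hashlib.sha256).hexdigest()
        self._records.append((record, mac))
        self._prev_mac = bytes.fromhex(mac)
        return mac

    def verify(self) -> bool:
        # Recompute the whole chain from the genesis link.
        prev = b"\x00" * 32
        for record, mac in self._records:
            payload = json.dumps(record, sort_keys=True).encode()
            expected = hmac.new(self._key, prev + payload, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, mac):
                return False
            prev = bytes.fromhex(expected)
        return True
```

This is the property Art. 12 cares about: post-deployment reconstruction is possible only if the log is tamper-evident, and the chained MAC makes any in-place edit detectable by `verify()`.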
Detailed article-level mapping: docs/compliance/eu-ai-act-mapping.md, which includes conformity assessment citation guidance.
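The Art. 14 provision (§16.3) hinges on the PENDING_AUTH → AUTHORIZE flow being structural: the transition is gated on the caller's role, not on anything the AI agent controls. A minimal sketch, with state and role names assumed for illustration:

```python
from enum import Enum, auto

class State(Enum):
    PENDING_AUTH = auto()
    AUTHORIZED = auto()
    REJECTED = auto()

class HiTLGate:
    """Illustrative HiTL gate: AI-originated commands start PENDING_AUTH
    and leave that state only via an OWNER-role decision."""

    def __init__(self):
        self.state = State.PENDING_AUTH

    def authorize(self, role: str) -> State:
        # Gated on the caller's role, never on a field in the AI's own
        # message, so the gate cannot be bypassed in-band by the agent.
        if self.state is State.PENDING_AUTH and role == "OWNER":
            self.state = State.AUTHORIZED
        return self.state

    def reject(self, role: str) -> State:
        if self.state is State.PENDING_AUTH and role == "OWNER":
            self.state = State.REJECTED
        return self.state
```

A call from any non-OWNER role is a no-op, which is what makes the oversight "structural" rather than advisory.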
NIST AI Risk Management Framework 1.0
Voluntary framework widely referenced in US federal agency and government procurement contexts; relevant for DoD and GSA-schedule robotics contracts.
| Function | Core Requirement | RCAN Provisions | Coverage |
|---|---|---|---|
| GOVERN | Organizational accountability, policies, and workforce capability for AI risk | §2 RBAC (role-scoped authority); §16 AI accountability provisions; L1–L4 conformance as a measurable governance target | Substantial |
| MAP | Identify and characterize AI risks in deployment context | FRIA protocol §19 (risk entries from conformance gaps + robot-memory hardware observations); rcan-spec#195 | Partial |
| MEASURE | Analyze and assess AI risks using quantitative and qualitative methods | L1–L4 conformance test suite (quantitative pass/fail per requirement); confidence gate rejection rates; audit chain integrity verification; safety benchmarks (OpenCastor#859) | Substantial |
| MANAGE | Prioritize and address risks; communicate residual risks to stakeholders | §16.2–16.3 gating (risk prevention); §16.4 thought log (decision transparency); AuditChain (residual risk evidence); FRIA artifact (stakeholder communication) | Substantial |
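The MEASURE row mentions confidence gate rejection rates as a quantitative signal. A hypothetical per-scope gate in the spirit of §16.2, with a rejection-rate counter that monitoring could sample (thresholds and scope names are illustrative):

```python
class ConfidenceGate:
    """Per-scope confidence thresholds with rejection-rate tracking."""

    def __init__(self, thresholds: dict[str, float]):
        self.thresholds = thresholds
        self.seen: dict[str, int] = {}
        self.rejected: dict[str, int] = {}

    def admit(self, scope: str, confidence: float) -> bool:
        self.seen[scope] = self.seen.get(scope, 0) + 1
        # Fail closed: an unconfigured scope gets an unsatisfiable
        # threshold, so unknown scopes are always rejected.
        ok = confidence >= self.thresholds.get(scope, 1.1)
        if not ok:
            self.rejected[scope] = self.rejected.get(scope, 0) + 1
        return ok

    def rejection_rate(self, scope: str) -> float:
        seen = self.seen.get(scope, 0)
        return self.rejected.get(scope, 0) / seen if seen else 0.0
```

The rejection rate doubles as a MEASURE-style metric and as the degradation signal the Art. 17 row describes: a rising rate for a scope indicates either model drift or an environment the thresholds were not tuned for.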
Detailed alignment: docs/compliance/nist-ai-rmf-alignment.md
Additional Frameworks
| Framework | Coverage | Scope and RCAN Provisions |
|---|---|---|
| ISO 10218-1:2025 | Partial | Safety requirements for industrial robots. Protocol 66 safety rules (15 rules across motion, force, workspace, human, thermal, electrical, software, emergency, property, and privacy domains); geofencing with dead-reckoning odometry; emergency stop with callback chain. Full alignment doc → |
| IEC 62443 | Partial | Industrial automation and control system cybersecurity. ML-DSA-65 + Ed25519 message signing; RBAC with rate limiting and session timeouts; JWT authentication; mDNS discovery with peer verification. Full alignment doc → |
| GDPR Article 22 | Partial | Automated individual decision-making. §16.3 HiTL gates (human in the decision loop); §16.4 thought log (decision explainability); privacy-by-default sensor policy in OpenCastor (camera, microphone scope controls). |
| HIPAA | Partial | Applicable to medical robotics (surgical, clinical support, care pathway automation). Role-gated audit record access (OWNER required for the reasoning field); tamper-evident chain for PHI-adjacent action logs; air-gap capable (no external network required). |
| ISO 42001 | Partial | AI management systems: organizational requirements. L1–L4 conformance levels provide measurable quality benchmarks for an AI management system's technical controls; audit chain supports post-market monitoring data infrastructure. |
| SIL/PLe (IEC 62061 / ISO 13849) | Partial | Functional safety for machinery. Safety stop integration (agent.safety_stop flag); latency budget constraint (latency_budget_ms); Protocol 66 safety invariants provide evidence for safety function documentation. |
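The §16.5 watermarking provision cited under EU AI Act Art. 50 uses the same HMAC primitive as the audit chain: a keyed token bound to the command payload, attachable to every AI-generated COMMAND message and checkable by a verification endpoint. A sketch, with the domain-separation prefix and function names assumed:

```python
import hashlib
import hmac

def watermark(key: bytes, command_payload: bytes) -> str:
    # Domain-separation prefix keeps watermark tokens distinct from any
    # other HMAC use of the same key material (prefix is an assumption).
    return hmac.new(key, b"ai-generated:" + command_payload, hashlib.sha256).hexdigest()

def verify_watermark(key: bytes, command_payload: bytes, token: str) -> bool:
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(watermark(key, command_payload), token)
```

A token verifies only against the exact payload it was issued for, which is what makes AI-generated commands machine-detectable after the fact.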
What RCAN does not address
RCAN is a protocol specification. The following compliance requirements are organizational, procedural, or regulatory in nature and are outside the scope of any protocol: EU AI Act Art. 43 conformity assessment and CE marking; Art. 49 registration in the EU AI public database; Art. 72 post-market monitoring organizational process; Art. 9(4) human-led risk estimation for unintended uses.
RCAN provides the technical controls and audit infrastructure that support these obligations; it does not constitute the organizational process itself. For conformity assessment template guidance, see docs/compliance/conformity-assessment-template.md.