# Safety-Native Embodied AI
Safety-Native Embodied AI integrates CHERI (Capability Hardware Enhanced RISC Instructions) across the hardware and software stack. Capability pointers enforce memory safety in silicon, while fine-grained compartments isolate critical modules. Even code written in traditionally unsafe languages gains protection because CHERI restricts pointers to authorised regions and prevents lateral movement between components.
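As a minimal sketch of what capability bounds look like to a programmer, consider the C fragment below. It assumes a CHERI C toolchain (for example CHERI LLVM targeting Morello or CHERI-RISC-V, in the pure-capability ABI) with the `<cheriintrin.h>` intrinsics header; `cheri_bounds_set` and `cheri_length_get` are documented CHERI intrinsics, though exact availability depends on the toolchain.

```c
#include <cheriintrin.h> /* CHERI intrinsics: cheri_bounds_set, cheri_length_get */
#include <stdio.h>

int main(void) {
    char buf[16];

    /* Derive a capability whose bounds cover only the first 8 bytes.
       The hardware, not the compiler, enforces these bounds. */
    char *p = cheri_bounds_set(buf, 8);
    printf("capability length: %zu bytes\n", (size_t)cheri_length_get(p));

    p[7] = 'x'; /* in bounds: executes normally */
    p[8] = 'x'; /* out of bounds: raises a hardware capability fault
                   instead of silently corrupting adjacent memory */
    return 0;
}
```

On a conventional processor the second store would silently overwrite neighbouring memory; under CHERI it traps deterministically at the offending instruction.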
## How this differs from conventional cybersecurity
Most defences are bolted on after the fact: firewalls, antivirus, and emergency patches that race against attackers. Those reactive measures leave gaps. CHERI instead builds safety checks into the processor’s DNA:
- Hardware-enforced memory safety: Capability bounds stop buffer overflows, rogue pointers, and use-after-free flaws at the offending instruction, before any damage spreads.
- Compartmentalisation by design: Each subsystem runs inside a hardware-backed compartment that limits blast radius and enables graceful, predictable failure modes.
- Deterministic safety: Violations become observable, contained events that trigger safe recovery instead of silent corruption (see the sketch after this list).
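To make "observable, contained events" concrete, here is a hedged sketch of capability provenance: a pointer rebuilt from a raw integer address carries no validity tag, so the hardware refuses to dereference it. Again this assumes a CHERI C toolchain with `<cheriintrin.h>`; `cheri_address_get`, `cheri_tag_get`, and the `ptraddr_t` address type follow the CHERI C/C++ conventions, but check your toolchain's headers.

```c
#include <cheriintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int secret = 42;
    int *p = &secret;

    /* Every valid capability carries a hidden hardware validity tag. */
    printf("legitimate pointer tagged: %d\n", (int)cheri_tag_get(p));

    /* Strip the capability down to its raw address, then try to
       rebuild a pointer from that integer. The provenance chain back
       to a valid capability is broken, so the result is untagged
       (the compiler also warns about the integer-to-pointer cast). */
    ptraddr_t addr = cheri_address_get(p);
    int *forged = (int *)addr;
    printf("forged pointer tagged:     %d\n", (int)cheri_tag_get(forged));

    /* *forged = 0;  -- would raise a capability fault: an observable,
                        contained event rather than silent corruption */
    return 0;
}
```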
## Why now?
CHERI is ready for deployment after more than a decade of research and engineering:
- Launched in 2010 by the University of Cambridge and SRI with DARPA support.
- Proved in silicon through the UK’s Digital Security by Design programme and Arm’s Morello prototype.
- Adopted by major collaborators such as Microsoft and Google, with commercial CHERI-enabled processors now available from companies like Codasip.
- Supported by open-source RISC-V cores, operating systems, toolchains, and active developer communities.
## What this enables
With CHERI baked into the silicon, our platform eliminates whole classes of vulnerabilities before deployment rather than patching them afterwards. Buffer overflows, dangling pointers, and module compromises are blocked by design. That makes embodied AI systems inherently safer, easier to certify, and far more trustworthy, whether they're on the road, in a factory, or in a clinical setting.
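As an illustration of the compartmentalisation idea at the API level, the sketch below shows one module handing another a read-only, bounds-limited view of a sensor buffer, so even a compromised consumer can neither write to it nor reach beyond it. The module names and buffer are hypothetical; `cheri_perms_and` and `CHERI_PERM_LOAD` follow the `<cheriintrin.h>` conventions, and full compartment isolation additionally relies on platform support (such as CHERIoT RTOS or CheriBSD) beyond this fragment.

```c
#include <cheriintrin.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical consumer in another compartment: it receives a
   capability that can load from exactly `len` bytes and nothing else. */
static uint8_t checksum(const uint8_t *view, size_t len) {
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum ^= view[i];           /* loads are permitted */
    /* ((uint8_t *)view)[0] = 0;    a store would fault: no store permission */
    return sum;
}

int main(void) {
    uint8_t frame[64] = { 1, 2, 3 };

    /* Least privilege: shrink bounds to the frame, then drop every
       permission except LOAD before sharing the pointer. */
    const uint8_t *view =
        cheri_perms_and(cheri_bounds_set(frame, sizeof frame),
                        CHERI_PERM_LOAD);

    printf("checksum: %u\n", checksum(view, sizeof frame));
    return 0;
}
```

The design point is that the producer, not a trusted kernel, decides exactly how much authority crosses the boundary; the hardware enforces that decision on every access.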