About AGIACC

AGIACC is an early-stage company focused on one hard problem: how to make AI systems trustworthy when they control devices, infrastructure, and real-world operations. We do not claim silicon solves every AI risk, but we do believe the physics of the processor is the right place to anchor trust when models gain actuators, network reach, and operational authority.

We build on what already works: CHERI-class capability architectures and the surrounding ecosystem (Arm Morello-class paths, CHERIoT, RISC-V capability work, UK Digital Security by Design, and the CHERI Alliance) provide deterministic memory safety and compartmentalisation that software-only sandboxes cannot equal. We invest in what comes next: research into AI-specific threat models, lifecycle integrity (what we sometimes summarise as a hardware-aware “AI BOM”), and ways to compose emerging hardware mitigations where they make sense.

This page outlines how we think about the problem, where we see commercial opportunity, and how we aim to help organisations protect embodied AI without over-claiming maturity we have not yet earned.

Founder-facing summary: AGIACC is building in a part of the AI stack that becomes more valuable as systems gain autonomy, regulatory exposure, and real-world consequences. We believe trusted infrastructure will be a durable layer of the market, not a temporary feature.


The Problem: A Losing Arms Race

The current approach to cybersecurity — monitoring for vulnerabilities, detecting exploits, issuing patches — is fundamentally unsustainable. With over 2.8 trillion lines of code in existence today and the relentless discovery of new vulnerabilities, reactive defense cannot keep pace.

The facts are stark:

  • 70% of all security vulnerabilities stem from memory safety issues (US Department of Defense).
  • AI security incidents grew 21.8× between 2022 and 2024 (OECD AI Incidents Monitor).
  • Software tools such as AddressSanitizer detect many memory errors but impose roughly 2× performance overhead — unacceptable for real-time AI systems.
  • Arm’s Memory Tagging Extension (MTE) is cheaper, but its protection is only probabilistic and can be bypassed if tag values leak.

The UK government, the US White House, and the CHERI Alliance have all recognised that the only sustainable path forward is hardware-enforced, deterministic memory safety — fixing the foundations, not the symptoms.


Our approach: build on capability hardware, then specialise for AI

Our product direction is safety-native: design systems so that critical boundaries are enforced by the ISA and microarchitecture, not only by policies we hope are bug-free. Concretely, that means adopting CHERI-style capability pointers and compartment models as the default substrate where we control the stack — and layering AI-aware policies and attestation on top.
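
To make that concrete, here is a minimal sketch of what a capability pointer gives you. It assumes a CHERI Clang/LLVM toolchain targeting a pure-capability ABI and the <cheriintrin.h> intrinsics header; intrinsic names can vary slightly between toolchain releases.

```c
/* Minimal sketch of CHERI capability pointers.
 * Assumes CHERI Clang/LLVM, a pure-capability ABI, and <cheriintrin.h>. */
#include <cheriintrin.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *buf = malloc(16);
    if (buf == NULL)
        return 1;

    /* Narrow the capability's bounds to the allocation. A CHERI-aware
     * allocator typically does this for you; shown here for clarity. */
    buf = cheri_bounds_set(buf, 16);

    /* Bounds and validity travel with the pointer itself. */
    printf("length: %zu, tag valid: %d\n",
           (size_t)cheri_length_get(buf), (int)cheri_tag_get(buf));

    buf[0] = 'A';   /* in bounds: proceeds as normal */
    buf[16] = 'B';  /* out of bounds: the hardware raises a capability
                     * fault here, deterministically, on every run */
    free(buf);
    return 0;
}
```

On a conventional CPU the second store would silently corrupt an adjacent allocation; on CHERI hardware it faults before memory changes. That is the determinism this page refers to throughout.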

Chip-level root of trust (ecosystem + our work)

We align with CPU and firmware patterns that establish measurement and keys before untrusted OS code runs — the same philosophy embodied in DSbD demonstrators and CHERI platforms. In engagements and R&D, we focus on:

  • CHERI Capability Pointers — Every memory reference becomes an unforgeable token of authority. Each capability encapsulates the address, permissible bounds, and specific permissions (read, write, execute) — all enforced by hardware on every access. Buffer overflows, use-after-free, and pointer injection are stopped deterministically, not probabilistically.
  • Built-In Cryptographic Engines — Hardware cryptographic engines and Trusted Cryptography Modules (TCMs) are initialised before the OS boot sequence, establishing a root of trust that cascades upward through every software layer.
  • Trusted boot and firmware measurement — Verified boot chains, attestation, and operational monitoring patterns suited to robotics and automotive program needs (often involving secure BMC / platform firmware in industrial designs); the measure-then-extend pattern behind these chains is sketched below.
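
The core of firmware measurement is a simple hash chain: each boot stage hashes the next stage into a running measurement before handing over control. The sketch below shows only that pattern, using OpenSSL's SHA-256 for illustration; real roots of trust perform the extension inside a TPM/TCM or equivalent hardware, and the stage names here are placeholders.

```c
/* Sketch of the measure-then-extend pattern behind trusted boot.
 * Uses OpenSSL SHA-256 for illustration only; real designs extend
 * measurements inside a TPM/TCM, not in application code. */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

/* measurement' = SHA256(measurement || component_image) */
static void extend(unsigned char m[SHA256_DIGEST_LENGTH],
                   const unsigned char *image, size_t len) {
    SHA256_CTX ctx;
    SHA256_Init(&ctx);
    SHA256_Update(&ctx, m, SHA256_DIGEST_LENGTH);
    SHA256_Update(&ctx, image, len);
    SHA256_Final(m, &ctx);
}

int main(void) {
    unsigned char m[SHA256_DIGEST_LENGTH] = {0};  /* known reset state */

    /* Placeholder stage images: each stage measures the next before
     * transferring control to it. */
    extend(m, (const unsigned char *)"firmware",   strlen("firmware"));
    extend(m, (const unsigned char *)"bootloader", strlen("bootloader"));
    extend(m, (const unsigned char *)"kernel",     strlen("kernel"));

    /* A verifier compares the final value against a known-good chain;
     * tampering with any stage changes every later measurement. */
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        printf("%02x", m[i]);
    putchar('\n');
    return 0;
}
```

Because each extension folds in the previous value, an attacker cannot substitute a stage without invalidating the final measurement that attestation reports upstream.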

Compartmentalised AI computing (target architecture)

Where we design or harden stacks, we aim to separate trusted execution and rich environments along CHERI capability boundaries — not only coarse virtual machines:

  • Hardware-backed isolation — Training, inference, tooling, and real-time pipelines can be placed in separate compartments so a flaw in one layer faces an architectural barrier before it reaches actuators or safety logic.
  • Fine-grained access control — Capabilities allow module-level policies that are difficult to express cleanly through MMU pages alone, with cost profiles suited to edge deployment when silicon is available; a sketch of permission narrowing at a compartment boundary follows this list.
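
As a sketch of what a compartment boundary can look like in code, the fragment below derives a least-privilege, load-only view of a buffer before handing it to a less-trusted module. It assumes a CHERI Clang toolchain and <cheriintrin.h>; the function names (publish_frame, untrusted_consume) are hypothetical, and permission constants may differ by target.

```c
/* Sketch: least-privilege capability derivation at a compartment
 * boundary. Assumes CHERI Clang and <cheriintrin.h>; function names
 * are hypothetical. */
#include <cheriintrin.h>
#include <stddef.h>

/* Stand-in for a less-trusted module, e.g. an inference plugin. */
static float untrusted_consume(const float *frame, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)
        acc += frame[i];      /* loads are permitted */
    /* ((float *)frame)[0] = 0; would trap: the capability carries no
     * store permission, and it cannot be regained downstream */
    return acc;
}

float publish_frame(float *frame, size_t n) {
    /* Drop every permission except data loads before crossing the
     * boundary. The callee, and anything it passes the pointer to,
     * holds only read authority over this region. */
    const float *ro = cheri_perms_and(frame, CHERI_PERM_LOAD);
    return untrusted_consume(ro, n);
}
```

Capability monotonicity is what makes this composition safe: derived capabilities can only shed authority, never add it.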

Full-lifecycle AI security (roadmap)

Our long-term scope spans the AI lifecycle — recognising that much of this is research and integration, not a single shipped SKU today:

  • Data Security — Privacy-Enhancing Technologies (PETs), data-safe sandbox models, and hardware-enforced encryption protect sensitive training data and telemetry from extraction.
  • Model Security — Hardware-rooted encryption and isolation guard against adversarial data poisoning and unauthorised model extraction, and bound the blast radius of prompt-injection attacks, with cryptographic operations accelerated directly on the compute platform.
  • Application Flow Control — Untrusted data streams are physically bounded by capability limits: inputs cannot escape their allocated buffer bounds even if the controlling software is compromised (see the sketch below).
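
The sketch below shows that flow-control idea applied to untrusted input: the parser receives a capability spanning exactly one message, so even a compromised parser cannot read or write adjacent memory. It assumes CHERI Clang and <cheriintrin.h>; handle_packet and parse_message are hypothetical names.

```c
/* Sketch: hardware-bounded handling of untrusted input.
 * Assumes CHERI Clang and <cheriintrin.h>; parse_message() stands in
 * for an arbitrary application parser. */
#include <cheriintrin.h>
#include <stddef.h>

void parse_message(unsigned char *msg, size_t len);  /* hypothetical */

void handle_packet(unsigned char *rxbuf, size_t buf_len, size_t msg_len) {
    if (msg_len > buf_len)
        return;  /* reject malformed length fields up front */

    /* Hand the parser a capability covering exactly this message.
     * Any load or store outside [msg, msg + msg_len) raises a
     * capability fault, even if the parser itself is compromised. */
    unsigned char *msg = cheri_bounds_set(rxbuf, msg_len);
    parse_message(msg, msg_len);
}
```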

Compatibility: Minimal Disruption, Maximum Impact

A critical advantage of CHERI is its compatibility with existing software ecosystems:

  • University of Cambridge researchers ported 6 million lines of C/C++ code to CHERI with modifications to only 0.026% of source lines.
  • Thales, in their RESAuto automotive security project, required only 1–2 code changes across 2.5 million lines of safety-critical code.
  • CHERI supports both pure-capability mode (every pointer becomes a capability) and a hybrid mode (legacy pointer code runs alongside capability-aware code), enabling the incremental adoption sketched below.
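
As a sketch of what incremental adoption looks like under hybrid mode (assuming a CHERI Clang toolchain; the __capability qualifier is CHERI-specific), legacy interfaces compile unchanged while individual pointers are opted in to hardware checking:

```c
/* Sketch of incremental adoption under CHERI's hybrid mode.
 * Assumes CHERI Clang; __capability is a CHERI-specific qualifier. */
#include <stddef.h>

/* Legacy interface: an ordinary integer pointer, compiled and run
 * exactly as on a conventional CPU. */
long sum_legacy(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Opted-in interface: the qualifier makes this one pointer a bounded,
 * tagged capability, so every a[i] access is hardware-checked. */
long sum_checked(const int * __capability a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}
```

This is why the porting figures above stay small: most code never needs annotation, and teams can tighten the security-critical paths first.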

Performance impact is minimal: Codasip’s CHERI-RISC-V core delivers these protections with negligible runtime overhead, and in some designs CHERI can even replace coarser, more power-hungry protection units.


Why work with AGIACC?

Reactive patching alone cannot secure long-lived, physically deployed AI. When software flaws meet robots, vehicles, or plant floor controllers, the failure mode is often physical, not just reputational.

We are organised around a safety-native mindset: use capability hardware where it is available, be honest about trade-offs and maturity, and collaborate with research and standards communities so AI security does not repeat the “move fast, fence later” mistakes of the pure-software era. We aim to help partners with infrastructure that is:

  • Inherently safer — deterministic hardware prevention, not probabilistic detection
  • Fully auditable — every memory access carries verifiable provenance
  • Regulatory-ready — aligned with national standards including China’s AI Computing Platform Security Framework, UK Digital Security by Design, and EU AI Act requirements
  • Commercially viable — minimal code changes, minimal performance overhead, massive attack surface reduction

For customers, that means clearer paths to pilots and assurance. For strategic investors, it means exposure to a layer of the AI stack that becomes more valuable as autonomy moves into regulated and safety-critical environments.