AGIACC is an early-stage company focused on one hard problem: how to make AI systems trustworthy when they control devices, infrastructure, and real-world operations. We build security at every layer of the stack — from software runtime hardening that deploys today, through confidential computing (TEE-based protection for AI workloads), to hardware capability architectures (CHERI) that provide deterministic guarantees no software can match.
Our approach spans the full protection spectrum:
- SafeClaw — Our software runtime layer that makes AI agent deployments (like OpenClaw) immediately safer through plugin isolation, prompt injection detection, and tamper-evident audit logging
- Confidential AI computing — TEE-based protection using Intel TDX, AMD SEV-SNP, and NVIDIA Confidential Computing to keep model weights and data encrypted even during processing
- CHERI capability architecture — Hardware-enforced memory safety and compartmentalization that eliminates entire vulnerability classes at the silicon level
We build on proven foundations: the CHERI Alliance ecosystem (Arm Morello, CHERIoT, RISC-V), NVIDIA Confidential Computing, Intel Trust Authority, the UK Digital Security by Design programme, and extensive academic research. Our contribution is to compose these technologies into a deployable, commercially viable AI security stack.
For investors: AGIACC builds in the part of the AI stack that becomes more valuable as systems gain autonomy, physical form, and regulatory exposure. SafeClaw provides immediate market entry while hardware capabilities create a durable competitive moat. Trusted infrastructure is a platform layer, not a feature.
The Problem: A Losing Arms Race
The current approach to cybersecurity — monitoring for vulnerabilities, detecting exploits, issuing patches — is fundamentally unsustainable. With over 2.8 trillion lines of code in existence today and the relentless discovery of new vulnerabilities, reactive defense cannot keep pace.
The facts are stark:
- 70% of all security vulnerabilities stem from memory safety issues (US Department of Defense).
- AI security incidents grew 21.8× between 2022 and 2024 (OECD AI Incidents Monitor).
- Software tools like AddressSanitizer detect many memory errors but impose roughly 2× performance overhead — unacceptable for real-time AI systems.
- Arm's Memory Tagging Extension (MTE) is cheaper, but its protection is only probabilistic — an attacker who learns the tag values can bypass it.
The UK government, the US White House, and the CHERI Alliance have all recognized that the only sustainable path forward is hardware-enforced, deterministic memory safety — fixing the foundations, not the symptoms.
Our approach: a full-stack protection spectrum
Our product direction is safety-native: design systems so that critical boundaries are enforced at the right level of the stack — software where it’s sufficient, hardware where it’s essential. We deliver security that scales from immediate deployability to architectural guarantees.
Layer 1: SafeClaw — Software runtime hardening (available now)
SafeClaw is our secure runtime layer for AI agent systems like OpenClaw. It provides immediate protection without requiring hardware changes:
- Plugin isolation — Each agent tool runs in a sandboxed environment with strict resource boundaries
- Prompt injection detection — ML-based classifiers and heuristic rules identify instruction-like content in data inputs
- PII redaction — Sensitive data is automatically removed from agent outputs and logs
- Human approval gates — High-risk actions require operator authorization before execution
- Tamper-evident audit logging — Integrity-protected records of all agent actions for forensic analysis and compliance
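One common construction behind tamper-evident logging is a hash chain: each record's MAC covers the previous record's MAC, so any retroactive edit, insertion, or deletion invalidates every record after it. A minimal Python sketch of the idea (illustrative only, not SafeClaw's actual implementation; the `AuditLog` class and its key handling are assumptions for the example):

```python
import hashlib
import hmac
import json

class AuditLog:
    """Append-only log where each record's MAC covers the previous
    record's MAC, so any retroactive edit breaks the chain."""

    def __init__(self, key: bytes):
        self._key = key              # MAC key held by the logging service
        self._entries = []           # list of (record_bytes, mac_bytes)
        self._prev = b"\x00" * 32    # genesis value for the chain

    def append(self, action: dict) -> None:
        # Canonical serialization so verification is deterministic.
        record = json.dumps(action, sort_keys=True).encode()
        mac = hmac.new(self._key, self._prev + record, hashlib.sha256).digest()
        self._entries.append((record, mac))
        self._prev = mac

    def verify(self) -> bool:
        # Recompute the whole chain; any altered record breaks it.
        prev = b"\x00" * 32
        for record, mac in self._entries:
            expected = hmac.new(self._key, prev + record, hashlib.sha256).digest()
            if not hmac.compare_digest(mac, expected):
                return False
            prev = mac
        return True
```

In a production design the chain head would additionally be anchored externally (e.g. periodically countersigned or written to write-once storage) so an attacker holding the MAC key cannot silently rebuild the whole chain.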
Layer 2: Confidential AI computing (deploying on TEE infrastructure)
Trusted Execution Environments protect AI workloads from infrastructure-level threats:
- CPU TEEs (Intel TDX, AMD SEV-SNP) — Encrypted memory isolation that prevents even host OS and hypervisor access to AI workload data
- GPU TEEs (NVIDIA Confidential Computing) — Model weights stay encrypted during GPU-accelerated inference and training on H100, H200, and B200 hardware
- Remote attestation — Cryptographic proof that workloads run in genuine, unmodified TEE environments, with Intel Trust Authority and NVIDIA Remote Attestation acting as independent verifiers
- Secure key release — Encryption keys are released only to workloads that pass attestation, so credentials and model weights are never exposed on unauthorized systems
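The attestation-gated key-release flow above can be sketched as a key broker that checks a signed workload measurement against an allowlist before handing over a key. This is a deliberately simplified illustration: a real TEE quote is signed by hardware-rooted keys and verified against the vendor's certificate chain (e.g. via Intel Trust Authority), for which an HMAC stands in here; `make_report` and `release_key` are hypothetical names.

```python
import hashlib
import hmac

def make_report(measurement: bytes, signing_key: bytes):
    """Simulate the TEE producing a signed measurement of the workload.
    (Stand-in for a hardware-signed attestation quote.)"""
    sig = hmac.new(signing_key, measurement, hashlib.sha256).digest()
    return measurement, sig

def release_key(report, signing_key: bytes, allowlist: set, wrapped_key: bytes):
    """Key broker: release the key only to attested, allowlisted workloads."""
    measurement, sig = report
    expected = hmac.new(signing_key, measurement, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("attestation signature invalid")
    if measurement not in allowlist:
        raise PermissionError("workload measurement not trusted")
    return wrapped_key
```

The design point is that the decision to release a secret is made by the relying party against a verified measurement, never by the (potentially compromised) host requesting it.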
Layer 3: CHERI capability architecture (hardware roadmap)
Where we design or harden stacks, we deploy CHERI-style capability architectures for deterministic security:
- Capability pointers — Every memory reference becomes an unforgeable, bounded token. Buffer overflows, use-after-free, and pointer injection are stopped at the hardware level with near-zero overhead
- Fine-grained compartmentalization — AI subsystems (inference, plugins, control loops) run in hardware-enforced compartments. Breaches physically cannot propagate between compartments
- Built-in cryptographic engines — Hardware crypto modules and Trusted Cryptography Modules (TCM) establish root of trust before OS boot
- Trusted boot and firmware measurement — Verified boot chains and attestation for robotics, automotive, and industrial deployments
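To make the capability model concrete, here is a software illustration of CHERI's semantics: a memory reference that carries bounds and permissions, can only be derived by narrowing, and faults on any out-of-bounds or unpermitted access. On real CHERI hardware these checks are performed by the CPU on tagged capability pointers with no software involvement; the `Capability` class below is purely conceptual.

```python
class Capability:
    """Conceptual model of a CHERI capability: an unforgeable reference
    carrying bounds and permissions, checked on every access."""

    def __init__(self, memory, base, length, perms=frozenset({"r", "w"})):
        self._mem = memory
        self.base, self.length, self.perms = base, length, perms

    def restrict(self, offset, length, perms=None):
        # Derivation can only narrow bounds and permissions, never widen.
        if offset < 0 or offset + length > self.length:
            raise ValueError("cannot widen capability bounds")
        new_perms = self.perms if perms is None else self.perms & perms
        return Capability(self._mem, self.base + offset, length, new_perms)

    def load(self, i):
        if "r" not in self.perms or not 0 <= i < self.length:
            raise MemoryError("capability violation on load")
        return self._mem[self.base + i]

    def store(self, i, value):
        if "w" not in self.perms or not 0 <= i < self.length:
            raise MemoryError("capability violation on store")
        self._mem[self.base + i] = value
```

This is why buffer overflows and pointer injection become deterministic faults rather than exploits: code cannot fabricate a wider reference than the one it was handed, so a compromised compartment is confined to the memory its capabilities describe.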
Full-lifecycle AI security (roadmap)
Our long-term scope spans the AI lifecycle:
- Data Security — Privacy-Enhancing Technologies (PETs), TEE-based data sandboxes, and hardware-enforced encryption protect training data and telemetry from extraction
- Model Security — TEE-protected inference prevents model theft; hardware-native encryption guards against adversarial attacks; CHERI compartmentalization isolates model serving from untrusted code
- Application Flow Control — Untrusted data streams are bounded by capability limits and TEE boundaries, ensuring inputs cannot escape their allocated scope even if controlling software is compromised
Compatibility: Minimal Disruption, Maximum Impact
A critical advantage of CHERI is its compatibility with existing software ecosystems:
- University of Cambridge researchers ported 6 million lines of C/C++ code to CHERI with modifications to only 0.026% of source lines.
- Thales, in their RESAuto automotive security project, required only 1–2 code changes across 2.5 million lines of safety-critical code.
- CHERI supports both pure-capability mode (all pointers become capabilities) and legacy mode (standard code runs alongside), enabling incremental adoption.
Performance impact is minimal — Codasip’s CHERI-RISC-V core demonstrated that CHERI provides substantial security benefits with negligible performance overhead, and in some cases can even replace coarser, more power-hungry protection units.
Why work with AGIACC?
Reactive patching alone cannot secure long-lived, physically deployed AI. When software flaws meet robots, vehicles, or plant-floor controllers, the failure mode is physical, not just reputational.
We deliver a complete protection spectrum — not just one layer, but a composable stack that meets organizations where they are today and grows with their needs:
- Immediately deployable — SafeClaw provides software runtime protection for AI agents today, with no hardware requirements
- Cryptographically verifiable — Confidential computing provides hardware-attested proof of workload integrity, satisfying the most demanding compliance requirements
- Architecturally secure — CHERI capability hardware eliminates entire vulnerability classes at the silicon level, providing guarantees no software layer can match
- Regulatory-ready — Aligned with the EU AI Act, China’s AI Computing Platform Security Framework, UK Digital Security by Design, NIST AI RMF, and industry standards (ISO 26262, IEC 62443, IEC 62304)
- Commercially viable — Minimal code changes, minimal performance overhead, massive attack surface reduction
For enterprise customers: clear paths from immediate software hardening through confidential computing to hardware-enforced security, with pilots and integration support at each stage.
For investors: exposure to a platform layer of the AI stack that becomes more valuable as autonomy moves into regulated, safety-critical, and physically deployed environments. SafeClaw provides near-term revenue while hardware capabilities build long-term moat.