Blog

Research-led writing on the security foundations that AI systems need — from software runtime hardening through confidential computing to hardware capability architectures.

Foundations of AI Runtime Security

·1024 words·5 mins
The new threat surface: AI systems have moved beyond text generation. Modern AI agents control hardware, execute shell commands, access files, browse the web, and integrate with messaging platforms. This expansion of capability is simultaneously an expansion of the attack surface.
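One common way to shrink that surface is to gate what an agent may execute. Here is a minimal sketch of a command allowlist; the `ALLOWED` set and `vet_command` helper are hypothetical illustrations, not code from the post:

```python
import shlex

# Hypothetical policy: the agent may only invoke pre-approved binaries.
ALLOWED = {"ls", "cat", "grep"}

def vet_command(command: str) -> bool:
    """Return True only if the command's program is on the allowlist
    and the line contains no shell chaining or piping."""
    try:
        parts = shlex.split(command)
    except ValueError:
        return False  # unbalanced quotes etc.
    if not parts or parts[0] not in ALLOWED:
        return False
    # Reject shell metacharacters that could smuggle in a second command.
    return not any(ch in command for ch in ";|&`$")

# A benign listing passes; destructive or chained commands do not.
assert vet_command("ls -la /tmp")
assert not vet_command("rm -rf /")
assert not vet_command("ls; rm -rf /")
```

A real runtime would enforce this below the agent (e.g. in the executor or a sandbox), since a prompt-injected model cannot be trusted to police itself.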

Confidential AI Computing: Protecting Models and Data in Use

·1233 words·6 mins
The gap in AI data protection: Encryption protects data at rest (on disk) and in transit (over networks). But AI workloads must decrypt data to process it, creating a vulnerability window where models, training data, and inference inputs exist unencrypted in memory. This “data in use” gap is the target of confidential computing.
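The gap can be made concrete with a toy sketch. The cipher below is an insecure stand-in for real at-rest encryption, and the key and weight values are invented for illustration:

```python
import hashlib

# Toy stream cipher for illustration only (NOT secure): SHA-256 in counter mode.
def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"model-weights-key"                     # hypothetical key
plaintext_weights = b"layer0: 0.12 -0.87 ..."  # stand-in for model weights

# At rest: the weights sit encrypted on disk.
weights_at_rest = xor(plaintext_weights, keystream(key, len(plaintext_weights)))

# In use: the workload must decrypt before it can compute, so the plaintext
# now lives in ordinary RAM, readable by anything that can inspect this
# process's memory. This is the window confidential computing closes by
# keeping the decrypted data inside a hardware-protected enclave.
weights_in_use = xor(weights_at_rest, keystream(key, len(weights_at_rest)))
assert weights_in_use == plaintext_weights
```

Encryption at rest and in transit leaves exactly this middle step exposed; confidential computing moves it behind hardware memory protection instead of eliminating it.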

From OpenClaw to SafeClaw: Securing AI Agent Runtimes

·1235 words·6 mins
What OpenClaw taught us: OpenClaw represents a new class of AI: agents that don’t just generate text but actively control hardware, execute shell commands, read and write files, browse the web, and integrate with messaging platforms like Slack, WhatsApp, and email. By giving AI “hands,” OpenClaw demonstrated the future of accessible, embodied AI.