AI Security · Developer Tooling · Open Source
I build tools that make security practical. Most of my work lives at the intersection of AI systems and the security gaps nobody's filling - agent security, supply chain integrity, configuration hardening. I write about these topics on Substack and build the tooling I wish existed.
Previously Head of AI Security at Aon. CISSP, AIGP.
|
agentscan - Map the attack surface of every AI coding agent on your machine. Enumerates Claude, Cursor, Windsurf, Copilot, and more. Cross-agent permission analysis. |
agent-security-patterns - Platform-agnostic threat model for autonomous AI agents. 32 threats, 12 defense patterns, zero-trust architecture. |
|
injectguard - Offline prompt injection scanner. 19 detection rules, risk scoring, no API keys required. |
rai-framework - Practical Responsible AI framework with risk tiers, lifecycle gates, and worked examples for classical ML and GenAI. |
|
dockaudit - Dockerfile security auditor. 27 rules, secret detection, A-F grading. Zero dependencies. |
codemap - Intelligent codebase summaries for AI agents. ~750 tokens vs 100k+. Feed your entire project to an LLM without blowing the context window. |
🤖 AI Agent Security - Static analyzers, runtime auditors, and threat models for the emerging agent ecosystem. If you're deploying autonomous agents, these tools help you understand what they can access and where the gaps are.
agentlint · agentscan · agentconfig · agentdrift · agentflow · sandboxaudit · injectguard · promptaudit · sessionaudit · skillsafe
🔒 Supply Chain & Infrastructure - Hardening the pipeline. Typosquatting detection, lockfile integrity, container security, CI/CD analysis, secret management.
depsafe · typosafe · lockaudit · dockaudit · composeaudit · ghaaudit · ciaudit · hookaudit · wheelaudit · setupaudit
🛡️ Code Quality & Security Analysis - AST-based static analyzers for Python. Crypto misuse, SQL injection, async antipatterns, resource leaks, error handling.
cryptaudit · sqlsafe · asyncaudit · leakaudit · erroraudit · vibecheck · edgecheck · perfaudit
📐 Frameworks & Research - Threat models, governance frameworks, and design patterns for organizations deploying AI at scale.
agent-security-patterns · secure-openclaw-patterns · staged-autonomy-patterns · rai-framework
Recent posts from AI Risk Praxis:
- A Different Way of Working - What knowledge workers need to understand about working with AI in 2026
- The AI Security Industry is Bullshit - No one understands AI security and people are about to get hurt
- The Weight of Watching It Happen - Notes from inside the AI disruption
- Making Sense of Agentic AI Governance - How to think about governance when agents access real data and take real actions
Most tools are zero-dependency Python, designed to run anywhere without pip install. CI-ready with JSON output and A-F grading.
