An open standard for defining security boundaries of AI agents. Like README.md, but for security. Add an AGENTSECURITY.md to any agent project and enforce safe defaults before your agent runs.
```shell
# Install the validator
pip install agentsec

# Initialize a security policy
agentsec init --tier standard

# Validate your policy
agentsec validate .

# Scan your codebase for violations
agentsec check .
```
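A minimal policy file might look like the sketch below. The field names and layout here are illustrative assumptions, not the normative spec; consult the format reference for the actual fields, tiers, and validation rules.

```markdown
---
# All field names below are illustrative assumptions, not normative.
tier: standard                 # matches the quick-start init example
tools:
  allow: [web_search, read_file]   # tool allowlisting
  deny: [shell_exec]
hitl:
  require_approval: [payments, data_deletion]  # human-in-the-loop gates
audit:
  log: ./logs/agent-audit.jsonl
---

# Agent Security Policy

Free-form rationale, known limitations, and what this policy does NOT protect against.
```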
Why defining security before deployment matters for autonomous agents: the threat landscape and how AGENTSECURITY.md addresses it.
Complete format reference for AGENTSECURITY.md. Fields, tiers, validation rules, and compliance mappings.
Integration guide for agent frameworks, CI/CD pipelines, and editors. Templates and examples.
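As a sketch of framework-agnostic integration, an agent could load the policy at startup and deny tools by default. The frontmatter layout and field names (`allow_tools`) are assumptions for illustration, not the spec's schema:

```python
# Sketch: load AGENTSECURITY.md at agent startup and enforce a deny-by-default
# tool allowlist. Frontmatter layout and field names are assumptions.
from pathlib import Path


def load_policy(path: str = "AGENTSECURITY.md") -> dict:
    """Parse simple 'key: value' lines from the policy's --- frontmatter."""
    policy = {}
    in_frontmatter = False
    for line in Path(path).read_text().splitlines():
        if line.strip() == "---":
            if in_frontmatter:
                break          # closing fence ends the frontmatter
            in_frontmatter = True
            continue
        if in_frontmatter and ":" in line:
            key, _, value = line.partition(":")
            policy[key.strip()] = value.strip()
    return policy


def tool_allowed(policy: dict, tool: str) -> bool:
    """Deny by default: a tool must appear on the comma-separated allowlist."""
    allow = [t.strip() for t in policy.get("allow_tools", "").split(",") if t.strip()]
    return tool in allow
```

A framework adapter would call `tool_allowed` inside its tool-dispatch hook and refuse (or escalate to a human) anything not on the list.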
- Prototypes & internal tools: minimal friction, basic guardrails.
- Production agents: tool allowlisting, human-in-the-loop (HITL) for high-risk actions, OWASP alignment.
- Sensitive data & financial systems: mandatory sandbox, full audit, NIST alignment.
- Healthcare, finance, government: tamper-proof audit, dual approval, full compliance mapping.
Define security boundaries before your agent runs, not after a breach. Architecture-level safety, not runtime patches.
Works with LangChain, CrewAI, AutoGen, Claude Code, and any future framework. One standard, everywhere.
Metadata loads at startup (~100 tokens). Full policy on activation. Detailed references on demand. Minimal context overhead.
Every template acknowledges what the spec can and cannot do. No security theater. No false promises.
GitHub Action blocks insecure PRs. JSON reports feed into your existing security pipeline. Pre-commit hooks available.
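A CI check could be wired up roughly as below, reusing the CLI from the quick start. This is a sketch: the workflow filename, step layout, and the JSON output flag are assumptions (a dedicated marketplace action may exist instead).

```yaml
# .github/workflows/agentsec.yml — illustrative sketch, not an official workflow
name: agentsec
on: [pull_request]
jobs:
  policy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install agentsec
      - run: agentsec validate .
      # Exact JSON flag is an assumption; feed the report to your pipeline.
      - run: agentsec check . --format json > agentsec-report.json
```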
Each rule maps to OWASP LLM Top 10, NIST AI RMF, ISO 42001, and EU AI Act controls. Automated compliance reporting.
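One way such automated reporting could work is a static rule-to-framework mapping table that drives a JSON report. The rule names and mapping entries below are illustrative assumptions; the control IDs come from the published OWASP LLM Top 10 and NIST AI RMF function names:

```python
# Sketch: map hypothetical policy rules to compliance controls and emit JSON.
# Rule names and the mapping table itself are illustrative assumptions.
import json

MAPPINGS = {
    "tool_allowlist":   {"owasp_llm": "LLM08 Excessive Agency", "nist_ai_rmf": "GOVERN"},
    "hitl_high_risk":   {"owasp_llm": "LLM08 Excessive Agency", "nist_ai_rmf": "MANAGE"},
    "output_filtering": {"owasp_llm": "LLM02 Insecure Output Handling", "nist_ai_rmf": "MEASURE"},
}


def compliance_report(enabled_rules):
    """Return a JSON array listing the mapped controls for each enabled rule."""
    return json.dumps(
        [{"rule": r, **MAPPINGS[r]} for r in enabled_rules if r in MAPPINGS],
        indent=2,
    )
```

A report like this can be archived as a CI artifact or ingested by an existing GRC tool.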
Show that your agent has a declared security policy.
[agentsecurity.dev](https://agentsecurity.dev)