AgentSecurity is an open standard that defines security boundaries
for autonomous AI agents through a simple, declarative file: AGENTSECURITY.md.
Think of it as a README.md for security. It lives in the root of your agent project
and tells the agent (and anyone reading the code) exactly what the agent is allowed to do,
what it must never do, and what requires human approval.
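A minimal policy might look like the sketch below. The section and field names here are illustrative only; the actual schema comes from the templates generated by `agentsec init`.

```markdown
# AGENTSECURITY.md (illustrative sketch — section names are hypothetical)

## Allowed
- read files under ./src/
- call the internal ticketing API

## Never
- delete database records
- send email to external domains

## Requires human approval
- deploy to production
- any action that spends money
```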
AI agents are being deployed with increasing autonomy: they can execute code, call APIs, modify databases, send emails, and manage infrastructure. Yet most developers building agents don't have security expertise.
The result: agents with implicit, undefined security boundaries running in production.
Instead of teaching every developer security, AgentSecurity embeds safe defaults into a file they can copy, customize, and validate.
AGENTSECURITY.md provides declared, validated security boundaries, and agentsec check . verifies your code matches your declared policy.

To create a policy, run:

agentsec init --tier standard
This creates a pre-configured policy file. Customize it for your agent's actual tools and capabilities.
agentsec validate .
Checks that all required fields are present and tier-specific requirements are met.
agentsec check .
Detects undeclared tool usage, hardcoded secrets, dangerous code patterns, and policy gaps.
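To make the secret-scanning part of this concrete, here is a simplified, illustrative sketch of the kind of pattern matching a check like this could perform. This is not the real `agentsec` implementation; the patterns and function name are mine.

```python
import re

# Illustrative patterns only — a real scanner would use a much larger,
# maintained ruleset. These catch two common shapes of hardcoded secrets.
SECRET_PATTERNS = [
    # variable names like api_key / secret / token / password assigned a long quoted string
    re.compile(r"""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['"][^'"]{8,}['"]"""),
    # the fixed prefix + length of an AWS access key ID
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return the lines of `source` that look like hardcoded credentials."""
    hits = []
    for line in source.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits

if __name__ == "__main__":
    code = 'API_KEY = "sk-1234567890abcdef"\nname = "demo"'
    print(find_hardcoded_secrets(code))
```

A real checker would also parse the code to compare tool calls against the declared policy, which regexes alone cannot do.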
Add the GitHub Action to block PRs that violate your security policy.
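A CI wiring for this could look like the following sketch. The install method and step layout are assumptions, not the published action's actual interface:

```yaml
# .github/workflows/agentsec.yml — illustrative sketch, not the official workflow
name: agentsec
on: [pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install agentsec        # assumes a pip-installable CLI
        run: pip install agentsec
      - name: Enforce declared policy # fails the PR on policy violations
        run: |
          agentsec validate .
          agentsec check .
```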
Inspired by Agent Skills, AgentSecurity uses a three-layer progressive disclosure model to minimize context window overhead.
Not a runtime guarantee. AGENTSECURITY.md defines intent, not enforcement. A compromised LLM can still violate the policy. Runtime enforcement requires additional infrastructure (proxy, gateway).
Not a certification. Declaring NIST alignment means you've designed with NIST in mind. Formal certification requires independent audit.
Not a substitute for expertise. Templates provide safe defaults, but complex architectures need human security review.
It IS a security abstraction layer. It lets developers who don't know OWASP, NIST, or IAM still ship agents with declared, validated security boundaries.