AI security guardrails for LLM applications
Open-source, modular guardrails to protect agents from hallucinations, prompt injection, and sensitive data leaks.
Install package
pip install "pydefend"
Setup configuration
defend init
Run defend
defend serve
Pipeline
Guardrails before and after the model
Apply policy-driven checks on both sides of your LLM call.
Input
User prompt / tool input
LLM
Provider call unchanged
Model
Response
Return to user / tools
Input guard
Topic control, PII, injection detection.
Output guard
Prompt leak, PII, safety checks, constraints.
Input guard
Topic control, PII, injection detection.
Input
User prompt / tool input
LLM
Provider call unchanged
Model
Response
Return to user / tools
Output guard
Prompt leak, PII, safety checks, constraints.
Getting started
A simple setup to get started
Pick a provider, choose a model, and get a one-line defend init command.
Your setup
defend init --token "defend_v1_eNp1jjEOwzAMA__C2UtXf6XooFRKYVSxDScKUAT6e-2gS4dsBHmUeOBl1HhFPJBytW2IpbCpdO_-CKit7ImlIYJllszwgGLbj5VMkwojzqSrhMtuqZ1McD8R0fHRL_BzTb-nEz3fiNlUR5gWap-_GTvizb8LS0Nz"
Add guardrails in minutes
Install with pip install pydefend and start guarding input and output.