
Configuration: YAML Guard Rules, Env Vars, and Presets

Configure YAML guard rules, environment variables, request matching, and presets for an LLM security proxy with OpenGuard.

OpenGuard loads guard rules from a YAML file (default: ./guards.yaml). Override with:

Terminal window
uvx openguard serve --config path/to/config.yaml
# or via environment variable:
OPENGUARD_CONFIG=path/to/config.yaml uvx openguard serve
# multiple files, merged in order:
OPENGUARD_CONFIG=base.yaml,overrides.yaml uvx openguard serve

In Docker, mount your config file into the container at /app/guards.yaml:

Terminal window
docker run -p 23294:23294 \
-v ./guards.yaml:/app/guards.yaml \
ghcr.io/Jitera-Labs/openguard:main

To use a file at a different path inside the container, set OPENGUARD_CONFIG:

Terminal window
docker run -p 23294:23294 \
-v ./my-config.yaml:/etc/openguard/guards.yaml \
-e OPENGUARD_CONFIG=/etc/openguard/guards.yaml \
ghcr.io/Jitera-Labs/openguard:main
Guard rules are defined under a top-level guards key:

guards:
  - match: <filter>
    apply:
      - type: <guard_type>
        config: { ... }

Each rule has a match filter and an ordered list of guards to apply. Rules are evaluated sequentially — multiple rules can match the same request, and all matching guards run in order.
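For illustration, here is a sketch of a config with two rules that can both match the same request. The guard type names secrets and pii are hypothetical placeholders, not confirmed OpenGuard guard types; see the presets for the real ones.

```yaml
guards:
  # Rule 1: matches every model
  - match:
      model:
        _ilike: "%"
    apply:
      - type: secrets        # hypothetical guard type
        config: {}
  # Rule 2: additionally matches GPT-4 variants
  - match:
      model:
        _ilike: "%gpt-4%"
    apply:
      - type: pii            # hypothetical guard type
        config: {}
```

A request for gpt-4o matches both rules, so the first rule's guard runs before the second rule's guard.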

Filters use a Hasura-style query syntax evaluated against the request context. Available fields: model, user, provider, and any other request parameters.

Operator             | Description
_eq, _neq            | Equality
_gt, _lt, _gte, _lte | Comparison
_in, _nin            | Array membership
_is_null             | Null check
_like, _ilike        | SQL-style wildcard (%) matching
_regex, _iregex      | Regex matching
_and, _or, _not      | Logical composition
# Match all models
match:
  model:
    _ilike: "%"

# Match GPT-4 variants
match:
  model:
    _ilike: "%gpt-4%"

# Combine conditions
match:
  _and:
    - model:
        _iregex: "claude|gpt-4"
    - user:
        _eq: "admin"
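The operator semantics above can be sketched in Python. This is an illustrative model of how the filters behave, not OpenGuard's actual matching code:

```python
import operator
import re

# Comparison operators map directly onto Python's operator module
OPS = {"_gt": operator.gt, "_lt": operator.lt,
       "_gte": operator.ge, "_lte": operator.le}

def check(op: str, value, arg) -> bool:
    """Evaluate one operator against a single context value."""
    if op == "_eq":
        return value == arg
    if op == "_neq":
        return value != arg
    if op == "_in":
        return value in arg
    if op == "_nin":
        return value not in arg
    if op == "_is_null":
        return (value is None) == arg
    if op in ("_like", "_ilike"):
        # SQL '%' wildcard becomes '.*'; everything else is literal
        pattern = re.escape(str(arg)).replace("%", ".*")
        flags = re.IGNORECASE if op == "_ilike" else 0
        return re.fullmatch(pattern, str(value), flags) is not None
    if op in ("_regex", "_iregex"):
        flags = re.IGNORECASE if op == "_iregex" else 0
        return re.search(str(arg), str(value), flags) is not None
    if op in OPS:
        return OPS[op](value, arg)
    raise ValueError(f"unknown operator: {op}")

def matches(filt: dict, ctx: dict) -> bool:
    """True if every condition in the filter holds for the request context."""
    for key, cond in filt.items():
        if key == "_and":
            if not all(matches(sub, ctx) for sub in cond):
                return False
        elif key == "_or":
            if not any(matches(sub, ctx) for sub in cond):
                return False
        elif key == "_not":
            if matches(cond, ctx):
                return False
        else:
            value = ctx.get(key)
            if not all(check(op, value, arg) for op, arg in cond.items()):
                return False
    return True
```

Under this model, matches({"model": {"_ilike": "%gpt-4%"}}, {"model": "openai/gpt-4o"}) holds, while the same filter rejects a Claude model.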
Variable                  | Default                   | Description
OPENGUARD_CONFIG          | ./guards.yaml             | Comma-separated guard config paths
OPENGUARD_OPENAI_URL_*    | http://localhost:11434/v1 | Downstream OpenAI-compatible URLs
OPENGUARD_OPENAI_KEY_*    | (unset)                   | API keys for OpenAI-compatible providers
OPENGUARD_ANTHROPIC_URL_* | (unset)                   | Downstream Anthropic URLs
OPENGUARD_ANTHROPIC_KEY_* | (unset)                   | API keys for Anthropic providers
OPENGUARD_API_KEY         | (unset)                   | Single key to protect this proxy
OPENGUARD_API_KEYS        | (unset)                   | Semicolon-separated additional proxy keys
OPENGUARD_PORT            | 23294                     | Server port
OPENGUARD_HOST            | 0.0.0.0                   | Server bind address
OPENGUARD_LOG_LEVEL       | INFO                      | Log level
OPENGUARD_CORS_ORIGINS    | (unset)                   | Semicolon-separated allowed CORS origins
OPENGUARD_MODEL_FILTER    | (unset)                   | Hasura-style filter for downstream model list

Wildcard variables (* suffix) accept any name — e.g., OPENGUARD_OPENAI_KEY_PROD, OPENGUARD_OPENAI_KEY_2. All matching values are gathered automatically.
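As a sketch, configuring two downstream OpenAI-compatible providers this way might look like the following. The _LOCAL and _PROD suffixes are arbitrary names, and the field the model filter keys on is an assumption; check your downstream model list response for the actual field names.

```shell
# Two downstream providers; the suffix after the final underscore is free-form
export OPENGUARD_OPENAI_URL_LOCAL=http://localhost:11434/v1
export OPENGUARD_OPENAI_URL_PROD=https://api.openai.com/v1
export OPENGUARD_OPENAI_KEY_PROD="sk-..."

# Hasura-style filter applied to the downstream model list (field name assumed)
export OPENGUARD_MODEL_FILTER='{"id": {"_ilike": "%gpt-4%"}}'

uvx openguard serve
```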

As an alternative to environment variables, create ~/.config/openguard/config.yaml:

port: 23294
providers:
  - type: openai
    key: sk-proj-...
  - type: anthropic
    key: sk-ant-...
log_level: INFO

OpenGuard ships with ready-made guard configs so you don’t have to write one from scratch:

Preset               | What it covers
presets/agentic.yaml | Secrets leakage, PII exposure, prompt injection, dangerous shell commands. Tailored for coding agents. Default when running via the Docker wrapper.
presets/full.yaml    | Exercises every guard type. Used by integration tests; also useful as a reference for all available options.
Terminal window
# Serve the proxy with a preset
uvx openguard serve --config presets/agentic.yaml
# Or launch a CLI tool through the proxy
OPENGUARD_CONFIG=presets/agentic.yaml uvx openguard launch claude

In Docker, the presets are already baked into the image at /app/presets/:

Terminal window
docker run -p 23294:23294 \
-e OPENGUARD_CONFIG=/app/presets/agentic.yaml \
-e OPENGUARD_OPENAI_KEY_1="sk-..." \
ghcr.io/Jitera-Labs/openguard:main

You can also layer a preset with your own overrides — files are merged in order:

Terminal window
OPENGUARD_CONFIG=presets/agentic.yaml,./my-overrides.yaml uvx openguard serve
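For example, an overrides file might append an extra rule on top of the preset. The regex guard type and its pattern key are hypothetical placeholders here, and how OpenGuard merges list values across files is an assumption; consult the guard reference for the real options.

```yaml
# my-overrides.yaml: layered after presets/agentic.yaml
guards:
  - match:
      model:
        _ilike: "%"
    apply:
      - type: regex          # hypothetical guard type
        config:
          pattern: "internal-codename-.*"
```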