# Getting Started: Run a Local LLM Security Proxy
Install and run OpenGuard in under a minute. Set up the LLM security proxy for Claude Code, Codex, or any OpenAI-compatible agent with one command.
OpenGuard is a local proxy that sits between your app and an LLM provider (OpenAI, Anthropic, or any OpenAI-compatible API like Ollama). Every request and response passes through a pipeline of guards — they can redact PII, block keywords, cap token usage, and more. Your app talks to OpenGuard exactly like it would talk to the real API; it doesn’t need to know OpenGuard exists.
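To make the pipeline idea concrete, here is a toy sketch of what a PII-redaction guard does, in plain Python. This is illustrative only, not OpenGuard's actual implementation; the function name and regexes are invented for this example:

```python
import re

# Illustrative sketch: a "guard" is conceptually a function over request text.
# This is NOT OpenGuard's real guard API, just the idea behind pii_filter.
def pii_redact(text: str) -> str:
    """Redact email addresses and US SSNs before the prompt leaves the machine."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
    return text

print(pii_redact("Reach jane@example.com, SSN 123-45-6789"))
# → Reach [EMAIL], SSN [SSN]
```

OpenGuard chains guards like this over every request and response, so the redaction happens before any text reaches the provider.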
## How you can run OpenGuard
There are three ways to run OpenGuard. Pick the one that fits your setup:
| Mode | What it does | Best for |
|---|---|---|
| Launch | Starts OpenGuard in the background, configures a coding agent to route through it, and launches the agent — all in one command. | Claude Code, Codex, OpenCode users |
| Serve | Runs OpenGuard as a standalone proxy. You point your own app or SDK at it. | Custom apps, any OpenAI/Anthropic SDK |
| Docker | Same as Serve, but containerized. No Python required on the host. | CI pipelines, production, Python-free setups |
All three modes listen on port 23294 by default and load guards from `guards.yaml` in the working directory.
## Launch mode
One command. OpenGuard starts in the background, wires the agent to route through it, then launches the agent. When the agent exits, OpenGuard stops automatically. The agent manages its own API credentials — no extra key configuration needed.
`uvx` is a Python package runner (like `npx` for Node). It downloads and runs OpenGuard in an isolated environment. Requires Python 3.10+.
```sh
uvx openguard launch claude    # Claude Code
uvx openguard launch codex     # Codex
uvx openguard launch opencode  # OpenCode
```

Any extra arguments are forwarded to the agent:

```sh
uvx openguard launch claude --model sonnet
```

OpenGuard ships with a built-in preset for coding agents — use it to get protection out of the box:

```sh
OPENGUARD_CONFIG=presets/agentic.yaml uvx openguard launch claude
```

Or point to your own config:

```sh
OPENGUARD_CONFIG=./guards.yaml uvx openguard launch claude
```

See Configure guards below for how to write your own, or Presets for what the built-in configs cover.
## Serve mode
Runs OpenGuard as a long-running proxy. You point your own app or SDK at it.
### Provider API keys
Tell OpenGuard which provider to forward to. You only need the one that applies to you:
```sh
# OpenAI
export OPENGUARD_OPENAI_KEY_1="sk-..."

# Anthropic
export OPENGUARD_ANTHROPIC_KEY_1="sk-ant-..."

# Local models (Ollama, LM Studio, etc.) — no key needed.
# OpenGuard forwards to http://localhost:11434/v1 by default.
```

OpenGuard forwards your key to the provider on each request. It does not store or log keys.
### Start the proxy
```sh
uvx openguard serve
```

To use a built-in preset or your own config:

```sh
# Built-in preset (secrets, PII, prompt injection, dangerous commands)
uvx openguard serve --config presets/agentic.yaml

# Your own config
uvx openguard serve --config ./guards.yaml
```

Then change the base URL in your SDK to `localhost:23294`. Use your real provider API key — OpenGuard forwards it as-is:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:23294/v1",
    api_key="sk-...",  # your real OpenAI key
)
```

```python
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:23294",
    api_key="sk-ant-...",  # your real Anthropic key
)
```

```sh
curl http://localhost:23294/v1/chat/completions \
  -H "Authorization: Bearer sk-..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

Your existing code works identically — the only change is the base URL.
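If a request hangs, first confirm the proxy is actually listening. A minimal reachability check, assuming the default port from above (`proxy_is_up` is our helper name, not part of OpenGuard):

```python
# Reachability check for the local proxy. "proxy_is_up" is an invented helper,
# not an OpenGuard API; 23294 is the default port from this guide.
import socket

def proxy_is_up(host: str = "localhost", port: int = 23294) -> bool:
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

print(proxy_is_up())  # True once `openguard serve` is running
```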
## Docker
Same as Serve, but containerized — no Python needed. Pass your provider key with `-e` and mount your guard config with `-v`:

```sh
docker run -p 23294:23294 \
  -e OPENGUARD_OPENAI_KEY_1="sk-..." \
  -v ./guards.yaml:/app/guards.yaml \
  ghcr.io/Jitera-Labs/openguard:main
```

Then point your SDK at `http://localhost:23294` the same way as in Serve mode.
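If you run services with Compose, the `docker run` command above translates to a service definition like this sketch (the image tag, port, and variable names are copied from that command; everything else is standard Compose):

```yaml
# docker-compose.yml: a hedged equivalent of the docker run command above
services:
  openguard:
    image: ghcr.io/Jitera-Labs/openguard:main
    ports:
      - "23294:23294"
    environment:
      OPENGUARD_OPENAI_KEY_1: "sk-..."   # or OPENGUARD_ANTHROPIC_KEY_1
    volumes:
      - ./guards.yaml:/app/guards.yaml
```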
For Anthropic, swap the env var:
```sh
docker run -p 23294:23294 \
  -e OPENGUARD_ANTHROPIC_KEY_1="sk-ant-..." \
  -v ./guards.yaml:/app/guards.yaml \
  ghcr.io/Jitera-Labs/openguard:main
```

## Configure guards
Create a file called `guards.yaml` in the directory where you run OpenGuard:
```yaml
guards:
  - match:
      model:
        _ilike: "%"  # matches every model name
    apply:
      - type: pii_filter  # redacts emails, phone numbers, SSNs, etc.
        config: {}
      - type: keyword_filter
        config:
          keywords: ["secret", "confidential"]
          action: block  # rejects the entire request if a keyword is found
```

OpenGuard loads `guards.yaml` from the current directory automatically. To use a different path:
```sh
# Launch mode
OPENGUARD_CONFIG=./my-guards.yaml uvx openguard launch claude

# Serve mode
uvx openguard serve --config ./my-guards.yaml

# Docker
docker run -p 23294:23294 \
  -e OPENGUARD_OPENAI_KEY_1="sk-..." \
  -v ./my-guards.yaml:/app/guards.yaml \
  ghcr.io/Jitera-Labs/openguard:main
```

With no config file present, all traffic passes through untouched — useful for verifying the proxy works before adding rules.
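Before pointing OpenGuard at a hand-written config, it can help to confirm the YAML parses into the shape shown above. A hedged sketch using PyYAML (the schema keys come from the example in this guide, not from a published OpenGuard schema):

```python
# Sanity-check a guards config before loading it. Requires: pip install pyyaml
import yaml

doc = """
guards:
  - match:
      model:
        _ilike: "%"
    apply:
      - type: pii_filter
        config: {}
      - type: keyword_filter
        config:
          keywords: ["secret", "confidential"]
          action: block
"""

cfg = yaml.safe_load(doc)
assert isinstance(cfg["guards"], list)
for guard in cfg["guards"]:
    # every guard needs a match filter and at least one action to apply
    assert "match" in guard and guard["apply"], guard
    print([g["type"] for g in guard["apply"]])
# → ['pii_filter', 'keyword_filter']
```

Swap the `doc` string for `open("guards.yaml").read()` to check a real file the same way.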
## Next steps
- Configuration — match filters, environment variables, multi-file configs.
- PII Filter — what PII patterns are detected and redacted.
- Keyword Filter — blocking, redacting, or auditing by keyword or regex.