
# Getting Started: Run a Local LLM Security Proxy

Install and run OpenGuard in under a minute. Set up the LLM security proxy for Claude Code, Codex, or any OpenAI-compatible agent with one command.

OpenGuard is a local proxy that sits between your app and an LLM provider (OpenAI, Anthropic, or any OpenAI-compatible API like Ollama). Every request and response passes through a pipeline of guards — they can redact PII, block keywords, cap token usage, and more. Your app talks to OpenGuard exactly like it would talk to the real API; it doesn’t need to know OpenGuard exists.
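The pipeline idea can be pictured with a short sketch. This is illustrative only — the guard names and function signatures below are invented for the example, not OpenGuard's API:

```python
# Illustrative sketch of a guard pipeline: each guard inspects the text
# in order and may transform it or reject the whole request.
class Blocked(Exception):
    """Raised when a guard rejects the request outright."""

def keyword_guard(text: str) -> str:
    # Block the request entirely if a banned keyword appears.
    if "confidential" in text:
        raise Blocked("keyword matched")
    return text

def redact_guard(text: str) -> str:
    # Redact a value instead of blocking (hypothetical example value).
    return text.replace("alice@example.com", "[EMAIL]")

def run_guards(text: str, guards) -> str:
    for guard in guards:
        text = guard(text)
    return text

print(run_guards("please email alice@example.com", [keyword_guard, redact_guard]))
# please email [EMAIL]
```

Blocking guards short-circuit the whole request; transforming guards pass their output to the next guard in line.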

Your app → OpenGuard (:23294 · guards run here) → LLM provider

There are three ways to run OpenGuard. Pick the one that fits your setup:

| Mode | What it does | Best for |
| --- | --- | --- |
| Launch | Starts OpenGuard in the background, configures a coding agent to route through it, and launches the agent — all in one command. | Claude Code, Codex, OpenCode users |
| Serve | Runs OpenGuard as a standalone proxy. You point your own app or SDK at it. | Custom apps, any OpenAI/Anthropic SDK |
| Docker | Same as Serve, but containerized. No Python required on the host. | CI pipelines, production, Python-free setups |

All three modes listen on port 23294 by default and load guards from guards.yaml in the working directory.

## Launch
One command. OpenGuard starts in the background, wires the agent to route through it, then launches the agent. When the agent exits, OpenGuard stops automatically. The agent manages its own API credentials — no extra key configuration needed.

uvx is a Python package runner (like npx for Node). It downloads and runs OpenGuard in an isolated environment. Requires Python 3.10+.

```shell
uvx openguard launch claude     # Claude Code
uvx openguard launch codex      # Codex
uvx openguard launch opencode   # OpenCode
```

Any extra arguments are forwarded to the agent:

```shell
uvx openguard launch claude --model sonnet
```

OpenGuard ships with a built-in preset for coding agents — use it to get protection out of the box:

```shell
OPENGUARD_CONFIG=presets/agentic.yaml uvx openguard launch claude
```

Or point to your own config:

```shell
OPENGUARD_CONFIG=./guards.yaml uvx openguard launch claude
```

See Configure guards below for how to write your own, or Presets for what the built-in configs cover.

## Serve
Runs OpenGuard as a long-running proxy. You point your own app or SDK at it.

Tell OpenGuard which provider to forward to. You only need the one that applies to you:

```shell
# OpenAI
export OPENGUARD_OPENAI_KEY_1="sk-..."

# Anthropic
export OPENGUARD_ANTHROPIC_KEY_1="sk-ant-..."

# Local models (Ollama, LM Studio, etc.) — no key needed.
# OpenGuard forwards to http://localhost:11434/v1 by default.
```

OpenGuard forwards your key to the provider on each request. It does not store or log keys.

```shell
uvx openguard serve
```

To use a built-in preset or your own config:

```shell
# Built-in preset (secrets, PII, prompt injection, dangerous commands)
uvx openguard serve --config presets/agentic.yaml

# Your own config
uvx openguard serve --config ./guards.yaml
```

Then point your SDK's base URL at http://localhost:23294. Use your real provider API key — OpenGuard forwards it as-is:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:23294/v1",
    api_key="sk-...",  # your real OpenAI key
)
```

Your existing code works identically — the only change is the base URL.

## Docker
Same as Serve, but containerized — no Python needed. Pass your provider key with -e and mount your guard config with -v:

```shell
docker run -p 23294:23294 \
  -e OPENGUARD_OPENAI_KEY_1="sk-..." \
  -v ./guards.yaml:/app/guards.yaml \
  ghcr.io/Jitera-Labs/openguard:main
```

Then point your SDK at http://localhost:23294 the same way as in Serve mode.

For Anthropic, swap the env var:

```shell
docker run -p 23294:23294 \
  -e OPENGUARD_ANTHROPIC_KEY_1="sk-ant-..." \
  -v ./guards.yaml:/app/guards.yaml \
  ghcr.io/Jitera-Labs/openguard:main
```
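If you prefer Compose, the same container can be described declaratively. This is a sketch derived from the `docker run` flags above; the service name is arbitrary:

```yaml
services:
  openguard:
    image: ghcr.io/Jitera-Labs/openguard:main
    ports:
      - "23294:23294"
    environment:
      OPENGUARD_OPENAI_KEY_1: "sk-..."   # or OPENGUARD_ANTHROPIC_KEY_1
    volumes:
      - ./guards.yaml:/app/guards.yaml
```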

## Configure guards
Create a file called guards.yaml in the directory where you run OpenGuard:

```yaml
guards:
  - match:
      model:
        _ilike: "%"              # matches every model name
    apply:
      - type: pii_filter         # redacts emails, phone numbers, SSNs, etc.
        config: {}
      - type: keyword_filter
        config:
          keywords: ["secret", "confidential"]
          action: block          # rejects the entire request if a keyword is found
```
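As a rough mental model for what `pii_filter` does, here is a regex-based redaction pass. This is a sketch only — OpenGuard's actual patterns and detection logic may differ:

```python
import re

# Hypothetical patterns in the spirit of pii_filter; not OpenGuard's own.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace each match with a labeled placeholder instead of the raw value.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at alice@example.com, SSN 123-45-6789"))
# Reach me at [EMAIL], SSN [SSN]
```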

OpenGuard loads guards.yaml from the current directory automatically. To use a different path:

```shell
OPENGUARD_CONFIG=./my-guards.yaml uvx openguard launch claude
```

With no config file present, all traffic passes through untouched — useful for verifying the proxy works before adding rules.
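Match filters can also target a subset of models rather than everything. Assuming `_ilike` uses SQL-style `%` wildcards (as the catch-all pattern above suggests), a config like this would guard only Claude models:

```yaml
guards:
  - match:
      model:
        _ilike: "claude-%"   # only models whose name starts with "claude-"
    apply:
      - type: pii_filter
        config: {}
```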
## Next steps
  • Configuration — match filters, environment variables, multi-file configs.
  • PII Filter — what PII patterns are detected and redacted.
  • Keyword Filter — blocking, redacting, or auditing by keyword or regex.