OpenGuard is a local proxy that runs between a coding agent (like Claude Code, Cursor, or any tool using OpenAI/Anthropic APIs) and the LLM provider. All LLM traffic passes through it, where configured guards evaluate each request and response before forwarding them on.
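Because the proxy speaks the same API as the upstream provider, routing an agent through it is typically just a base-URL change. A minimal sketch, assuming OpenGuard listens on `localhost:8080` (the host and port are assumptions; match them to your own proxy configuration):

```shell
# Route an OpenAI-compatible client through the local OpenGuard proxy.
# ANTHROPIC_BASE_URL works the same way for Anthropic clients.
# localhost:8080 is an assumed listen address — use your proxy's actual one.
export OPENAI_BASE_URL="http://localhost:8080/v1"
echo "$OPENAI_BASE_URL"
```

With the base URL overridden, the agent's requests hit the proxy first, guards run, and the proxy forwards approved traffic to the real provider with your API key.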
Guards are configured in a YAML file and include PII redaction, keyword blocking, token caps, and semantic input inspection. There is no telemetry; the proxy connects only to the providers you configure.
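A hypothetical configuration might look like the following. This is a sketch only: the key names (`guards`, `type`, `action`, `max_tokens`) are illustrative assumptions, not OpenGuard's documented schema.

```yaml
# Illustrative guard configuration — key names are assumptions,
# not the actual OpenGuard schema.
guards:
  - type: pii_redaction        # redact emails, phone numbers, etc. before forwarding
    action: redact
  - type: keyword_block        # reject requests containing listed terms
    action: block
    keywords: ["internal-codename"]
  - type: token_cap            # cap request size to control cost
    max_tokens: 8000
    action: block
```

Each guard evaluates the request (and, where applicable, the response) in order; a blocking guard short-circuits the call before it reaches the provider.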