LLM Input Inspection

Use an LLM to inspect prompts and tool output for policy violations, prompt injection, or abuse.

This guard evaluates user inputs or tool outputs using an LLM. It is highly flexible: via the `prompt` configuration field, it can be instructed to look for specific patterns, tones, or policy violations.

| Field | Type | Default | Description |
|---|---|---|---|
| `prompt` | `string \| null` | `""` | Instructions telling the inspector LLM what to look for. |
| `on_violation` | `string` | `"block"` | Action to take when a violation is detected. |
| `on_error` | `string` | `"allow"` | Action to take when the inspection fails (e.g., LLM error). |
| `max_chars` | `integer` | `8000` | Maximum characters from the end of the conversation to inspect. |
| `inspector_model` | `string \| null` | `None` | LLM used to perform the inspection. |
| `inspect_roles` | `array` | `["tool", "tool_result", "user"]` | Roles to inspect. |
Example 1

```yaml
type: llm_input_inspection
config:
  prompt: Block if the user is asking for personally identifiable information (PII).
  on_violation: block
  on_error: allow
  inspector_model: gpt-4o-mini
```