The AI Prompt Guard configuration takes two arrays of regular expressions: one for allow patterns, and one for deny patterns.
Prerequisites
First, as in the AI Proxy documentation, create a service, route, and ai-proxy plugin that will serve as your LLM access point.
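For instance, a minimal decK-style declarative sketch is shown below. The service URL, entity names, and model are placeholders, and the auth values assume an OpenAI-style key; adjust them for your provider and check the ai-proxy reference for your Gateway version.

_format_version: "3.0"
services:
- name: llm-service
  url: https://api.openai.com
  routes:
  - name: llm-route
    paths:
    - /llm
plugins:
- name: ai-proxy            # routes LLM traffic to the upstream provider
  route: llm-route
  config:
    route_type: llm/v1/chat # chat-style requests; the last example below uses llm/v1/completions
    auth:
      header_name: Authorization
      header_value: Bearer <OPENAI_API_KEY>
    model:
      provider: openai
      name: gpt-4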
You can now create the AI Prompt Guard plugin at the global, service, or route level, using the following examples.
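As a structural sketch in the same decK style (the route name is a placeholder), a route-scoped AI Prompt Guard entry has the following shape; the examples below fill in the pattern arrays.

plugins:
- name: ai-prompt-guard
  route: llm-route          # scope the guard to the LLM route created above
  config:
    allow_patterns: []      # if set, prompts must match at least one of these regexes
    deny_patterns: []       # prompts matching any of these regexes are rejected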
Examples
The following examples show allow and deny patterns used in a financial processing auditing model.
Card number adherence (“Allow Only”)
For requests to pass through in this example, any of the user role messages in the prompt must have all card fields adhering to this standard (starting with the integer 4, followed by 3 more integers, and finally 12 asterisks).
This configuration prevents accidental processing (and/or subsequent model training) of requests where full card numbers are sent in.
allow_patterns:
- '.*\"card\".*\"4[0-9]{3}\*{12}\"'
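As an invented illustration, an llm/v1/chat request body like the one below would pass, because the user message contains a card field matching the allow pattern; the same message with a full, unmasked card number would match nothing in allow_patterns and be rejected.

{
  "messages": [
    {
      "role": "user",
      "content": "Audit this payment: {\"card\": \"4532************\", \"amount\": 100}"
    }
  ]
}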
Card number adherence (“Deny Only”)
For requests to pass through in this example (an inverse of the above), none of the user role messages in the prompt may contain a card number field that starts with 5.
deny_patterns:
- '\"card\".*\"5[0-9]{12}(?:[0-9]{3})?\"'
Valid products (“Allow AND Deny rules”)
This example uses an ai-proxy plugin that has been configured for the llm/v1/completions route type. It expects only one JSON field: a prompt string.
For requests to pass through in this example, the message(s) from the caller to our audit LLM must satisfy both rules:
- Must contain at least one of the product names in the allow list
- Must not contain any of the product names in the deny list
allow_patterns:
- ".*(P|p)ears.*"
- ".*(P|p)eaches.*"
deny_patterns:
- ".*(A|a)pples.*"
- ".*(O|o)ranges.*"