Transform a response using OpenAI in Kong Gateway
Enable the AI Response Transformer plugin, configure the parameters under config.llm
to access your LLM, and describe the transformation to perform with the config.prompt
parameter.
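At a high level, those two pieces sit side by side in the plugin's configuration. The following is only a sketch of that shape, with placeholder values; the complete, working example is shown later in this guide:

plugins:
- name: ai-response-transformer
  config:
    prompt: <natural-language description of the transformation>
    llm:
      route_type: llm/v1/chat
      auth:
        header_name: Authorization
        header_value: <credentials for your LLM provider>
      model:
        provider: openai
        name: gpt-4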
Prerequisites
Kong Konnect
This is a Konnect tutorial and requires a Konnect personal access token.
- Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
- Export your token to an environment variable:
  export KONNECT_TOKEN='YOUR_KONNECT_PAT'
- Run the quickstart script to automatically provision a Control Plane and Data Plane, and configure your environment:
  curl -Ls https://get.konghq.com/quickstart | bash -s -- -k $KONNECT_TOKEN --deck-output
  This sets up a Konnect Control Plane named quickstart, provisions a local Data Plane, and prints out the following environment variable exports:
  export DECK_KONNECT_TOKEN=$KONNECT_TOKEN
  export DECK_KONNECT_CONTROL_PLANE_NAME=quickstart
  export KONNECT_CONTROL_PLANE_URL=https://us.api.konghq.com
  export KONNECT_PROXY_URL='http://localhost:8000'
  Copy and paste these into your terminal to configure your session.
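The plugin configuration later in this guide reads your OpenAI API key from the DECK_OPENAI_API_KEY environment variable (decK substitutes it via the ${{ env "DECK_OPENAI_API_KEY" }} reference), so export it in the same terminal session. The value below is a placeholder for your own key:

export DECK_OPENAI_API_KEY='YOUR_OPENAI_API_KEY'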
Enable the AI Response Transformer plugin
In this example, we want to inject a new header in the response after it’s proxied and before it’s returned to the client. To add a new header, we need to:
- Specify the response format to use in the prompt.
- Set the config.parse_llm_response_json_instructions parameter to true.
We also want to make sure that the LLM only returns the JSON content and doesn’t add extra text around it. There are two ways to do this:
- Include this instruction in the prompt, for example by adding “Return only the JSON message, no extra text”.
- Specify a regex in the config.transformation_extract_pattern parameter to extract only the data we need. This is the option we’ll use in this example, as illustrated in the sketch after this list.
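The plugin applies this pattern itself; the snippet below is only a stand-alone illustration of what the regex keeps. It feeds a hypothetical raw LLM reply (the JSON we need, wrapped in conversational text) through GNU grep using the same pattern, so only the braces and everything between them survive:

# Hypothetical raw LLM reply wrapped in extra conversational text
printf 'Sure, here is the JSON:\n{\n  "headers": { "new-header": "header-value" },\n  "status": 201,\n  "body": "new response body"\n}\nLet me know if you need anything else.\n' \
  | grep -Pzo '{((.|\n)*)}'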
Configure the AI Response Transformer plugin with the required LLM details, the transformation prompt, and the expected response body pattern to extract:
echo '
_format_version: "3.0"
plugins:
- name: ai-response-transformer
  config:
    prompt: |
      Add a new header named "new-header" with the value "header-value" to the response. Format the JSON response as follows:
      {
        "headers":
          {
            "new-header": "header-value"
          },
        "status": 201,
        "body": "new response body"
      }
    transformation_extract_pattern: "{((.|\\n)*)}"
    parse_llm_response_json_instructions: true
    llm:
      route_type: llm/v1/chat
      auth:
        header_name: Authorization
        header_value: Bearer ${{ env "DECK_OPENAI_API_KEY" }}
      model:
        provider: openai
        name: gpt-4
' | deck gateway apply -
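If you want to confirm the plugin reached your Control Plane before testing, you can, for example, dump the Gateway configuration back out with decK and search for the plugin entry. The -o - flag writes the dump to stdout instead of a file:

deck gateway dump -o - | grep -A 3 "name: ai-response-transformer"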
Validate
To check that the response transformation is working, send a request:
curl -i "$KONNECT_PROXY_URL/anything" \
-H "Accept: application/json"
curl -i "http://localhost:8000/anything" \
-H "Accept: application/json"
The response should contain the new header new-header: header-value.
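If you prefer a scripted check, you can filter the same request down to just that header; an empty result means the transformation did not run:

curl -si "$KONNECT_PROXY_URL/anything" \
  -H "Accept: application/json" | grep -i "new-header"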
Cleanup
Clean up Konnect environment
If you created a new Control Plane and want to conserve your free trial credits or avoid unnecessary charges, delete the new Control Plane used in this tutorial.