Basic configuration examples
The following examples provide some typical configurations for enabling the ai-proxy-advanced plugin on a service.
Make the following request:
curl -X POST http://localhost:8001/services/{serviceName|Id}/plugins \
  --header "accept: application/json" \
  --header "Content-Type: application/json" \
  --data '
  {
    "name": "ai-proxy-advanced",
    "config": {
      "targets": [
        {
          "route_type": "llm/v1/chat",
          "auth": {
            "header_name": "Authorization",
            "header_value": "Bearer token"
          },
          "model": {
            "provider": "openai"
          }
        },
        {
          "route_type": "llm/v1/chat",
          "model": {
            "provider": "llama2",
            "options": {
              "llama2_format": "openai",
              "upstream_url": "http://httpbin"
            }
          }
        }
      ]
    }
  }
  '
Replace {serviceName|Id} with the ID or name of the service that this plugin configuration will target.
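To confirm the plugin was created, you can list the plugins scoped to the service. This is a quick verification sketch that assumes the default Admin API address of localhost:8001:
curl -s http://localhost:8001/services/{serviceName|Id}/plugins \
  --header "accept: application/json"
The response should contain an ai-proxy-advanced entry whose config.targets array matches the two targets configured above.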
Make the following request, substituting your own access token, region, control plane ID, and service ID:
curl -X POST \
https://{us|eu}.api.konghq.com/v2/control-planes/{controlPlaneId}/core-entities/services/{serviceId}/plugins \
--header "accept: application/json" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer TOKEN" \
--data '{"name":"ai-proxy-advanced","config":{"targets":[{"route_type":"llm/v1/chat","auth":{"header_name":"Authorization","header_value":"Bearer token"},"model":{"provider":"openai"}},{"route_type":"llm/v1/chat","model":{"provider":"llama2","options":{"llama2_format":"openai","upstream_url":"http://httpbin"}}}]}}'
See the Konnect API reference to learn about region-specific URLs and personal access tokens.
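If you don't have your control plane ID at hand, one way to look it up is to list your control planes through the Konnect API. This is a sketch that assumes the us region and a valid personal access token:
curl -s https://us.api.konghq.com/v2/control-planes \
  --header "Authorization: Bearer TOKEN"
Use the id field of the matching control plane as {controlPlaneId} in the request above.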
First, create a KongPlugin resource:
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: ai-proxy-advanced-example
plugin: ai-proxy-advanced
config:
  targets:
  - route_type: llm/v1/chat
    auth:
      header_name: Authorization
      header_value: Bearer token
    model:
      provider: openai
  - route_type: llm/v1/chat
    model:
      provider: llama2
      options:
        llama2_format: openai
        upstream_url: http://httpbin
" | kubectl apply -f -
Next, apply the KongPlugin resource to the service by annotating it as follows:
kubectl annotate service SERVICE_NAME konghq.com/plugins=ai-proxy-advanced-example
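To verify that the annotation was applied, you can read it back. This is just a verification sketch; the backslashes escape the dots in the annotation key for jsonpath:
kubectl get service SERVICE_NAME -o jsonpath='{.metadata.annotations.konghq\.com/plugins}'
The command should print ai-proxy-advanced-example.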
Replace SERVICE_NAME with the name of the service that this plugin configuration will target. You can see your available services by running kubectl get service.
Note: The KongPlugin resource only needs to be defined once and can be applied to any service, consumer, or route in the namespace. If you want the plugin to be available cluster-wide, create the resource as a KongClusterPlugin instead of a KongPlugin.
Add this section to your declarative configuration file:
plugins:
- name: ai-proxy-advanced
  service: SERVICE_NAME|ID
  config:
    targets:
    - route_type: llm/v1/chat
      auth:
        header_name: Authorization
        header_value: Bearer token
      model:
        provider: openai
    - route_type: llm/v1/chat
      model:
        provider: llama2
        options:
          llama2_format: openai
          upstream_url: http://httpbin
Replace SERVICE_NAME|ID with the ID or name of the service that this plugin configuration will target.
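After adding the entry, you would typically push the file with decK. This is a sketch; the exact subcommand depends on your decK version (recent releases use the gateway prefix, while older ones use deck sync -s):
deck gateway diff kong.yaml
deck gateway sync kong.yaml
Running the diff first lets you preview the plugin change before syncing it to the gateway.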
Prerequisite: Configure your Personal Access Token
terraform {
  required_providers {
    konnect = {
      source = "kong/konnect"
    }
  }
}

provider "konnect" {
  personal_access_token = "kpat_YOUR_TOKEN"
  server_url            = "https://us.api.konghq.com/"
}
Add the following to your Terraform configuration to create a Konnect Gateway Plugin:
resource "konnect_gateway_plugin_ai_proxy_advanced" "my_ai_proxy_advanced" {
  enabled = true
  config = {
    targets = [
      {
        route_type = "llm/v1/chat"
        auth = {
          header_name  = "Authorization"
          header_value = "Bearer token"
        }
        model = {
          provider = "openai"
        }
      },
      {
        route_type = "llm/v1/chat"
        model = {
          provider = "llama2"
          options = {
            llama2_format = "openai"
            upstream_url  = "http://httpbin"
          }
        }
      }
    ]
  }
  control_plane_id = konnect_gateway_control_plane.my_konnect_cp.id
  service = {
    id = konnect_gateway_service.my_service.id
  }
}
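With the provider and resource defined, the standard Terraform workflow applies; this assumes you run the commands from the directory containing the configuration:
terraform init
terraform plan
terraform apply
terraform plan shows the ai-proxy-advanced plugin that will be created in the control plane before you apply the change.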
The following examples provide some typical configurations for enabling the ai-proxy-advanced plugin on a route.
Make the following request:
curl -X POST http://localhost:8001/routes/{routeName|Id}/plugins \
  --header "accept: application/json" \
  --header "Content-Type: application/json" \
  --data '
  {
    "name": "ai-proxy-advanced",
    "config": {
      "targets": [
        {
          "route_type": "llm/v1/chat",
          "auth": {
            "header_name": "Authorization",
            "header_value": "Bearer token"
          },
          "model": {
            "provider": "openai"
          }
        },
        {
          "route_type": "llm/v1/chat",
          "model": {
            "provider": "llama2",
            "options": {
              "llama2_format": "openai",
              "upstream_url": "http://httpbin"
            }
          }
        }
      ]
    }
  }
  '
Replace {routeName|Id} with the ID or name of the route that this plugin configuration will target.
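Once the plugin is attached to the route, you can exercise it by sending an OpenAI-style chat completion request through the Kong proxy. In this sketch, /my-llm-route is a placeholder for your route's path and the proxy is assumed to listen on localhost:8000:
curl -X POST http://localhost:8000/my-llm-route \
  --header "Content-Type: application/json" \
  --data '{
    "messages": [
      {"role": "user", "content": "Say hello in one sentence."}
    ]
  }'
Because both targets use the llm/v1/chat route type, the plugin load balances requests like this one across the configured OpenAI and Llama2 targets.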
Make the following request, substituting your own access token, region, control plane ID, and route ID:
curl -X POST \
https://{us|eu}.api.konghq.com/v2/control-planes/{controlPlaneId}/core-entities/routes/{routeId}/plugins \
--header "accept: application/json" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer TOKEN" \
--data '{"name":"ai-proxy-advanced","config":{"targets":[{"route_type":"llm/v1/chat","auth":{"header_name":"Authorization","header_value":"Bearer token"},"model":{"provider":"openai"}},{"route_type":"llm/v1/chat","model":{"provider":"llama2","options":{"llama2_format":"openai","upstream_url":"http://httpbin"}}}]}}'
See the Konnect API reference to learn about region-specific URLs and personal access tokens.
First, create a KongPlugin resource:
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: ai-proxy-advanced-example
plugin: ai-proxy-advanced
config:
  targets:
  - route_type: llm/v1/chat
    auth:
      header_name: Authorization
      header_value: Bearer token
    model:
      provider: openai
  - route_type: llm/v1/chat
    model:
      provider: llama2
      options:
        llama2_format: openai
        upstream_url: http://httpbin
" | kubectl apply -f -
Next, apply the KongPlugin resource to the ingress by annotating it as follows:
kubectl annotate ingress INGRESS_NAME konghq.com/plugins=ai-proxy-advanced-example
Replace INGRESS_NAME with the name of the ingress that this plugin configuration will target. You can see your available ingresses by running kubectl get ingress.
Note: The KongPlugin resource only needs to be defined once and can be applied to any service, consumer, or route in the namespace. If you want the plugin to be available cluster-wide, create the resource as a KongClusterPlugin instead of a KongPlugin.
Add this section to your declarative configuration file:
plugins:
- name: ai-proxy-advanced
  route: ROUTE_NAME|ID
  config:
    targets:
    - route_type: llm/v1/chat
      auth:
        header_name: Authorization
        header_value: Bearer token
      model:
        provider: openai
    - route_type: llm/v1/chat
      model:
        provider: llama2
        options:
          llama2_format: openai
          upstream_url: http://httpbin
Replace ROUTE_NAME|ID with the ID or name of the route that this plugin configuration will target.
Prerequisite: Configure your Personal Access Token
terraform {
  required_providers {
    konnect = {
      source = "kong/konnect"
    }
  }
}

provider "konnect" {
  personal_access_token = "kpat_YOUR_TOKEN"
  server_url            = "https://us.api.konghq.com/"
}
Add the following to your Terraform configuration to create a Konnect Gateway Plugin:
resource "konnect_gateway_plugin_ai_proxy_advanced" "my_ai_proxy_advanced" {
  enabled = true
  config = {
    targets = [
      {
        route_type = "llm/v1/chat"
        auth = {
          header_name  = "Authorization"
          header_value = "Bearer token"
        }
        model = {
          provider = "openai"
        }
      },
      {
        route_type = "llm/v1/chat"
        model = {
          provider = "llama2"
          options = {
            llama2_format = "openai"
            upstream_url  = "http://httpbin"
          }
        }
      }
    ]
  }
  control_plane_id = konnect_gateway_control_plane.my_konnect_cp.id
  route = {
    id = konnect_gateway_route.my_route.id
  }
}
The following examples provide some typical configurations for enabling the ai-proxy-advanced plugin on a consumer.
Make the following request:
curl -X POST http://localhost:8001/consumers/{consumerName|Id}/plugins \
  --header "accept: application/json" \
  --header "Content-Type: application/json" \
  --data '
  {
    "name": "ai-proxy-advanced",
    "config": {
      "targets": [
        {
          "route_type": "llm/v1/chat",
          "auth": {
            "header_name": "Authorization",
            "header_value": "Bearer token"
          },
          "model": {
            "provider": "openai"
          }
        },
        {
          "route_type": "llm/v1/chat",
          "model": {
            "provider": "llama2",
            "options": {
              "llama2_format": "openai",
              "upstream_url": "http://httpbin"
            }
          }
        }
      ]
    }
  }
  '
Replace {consumerName|Id} with the ID or name of the consumer that this plugin configuration will target.
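Note that a consumer-scoped plugin only runs after Kong has identified the consumer, so the route also needs an authentication plugin. As a sketch, with key-auth configured the request might look like this, where the apikey value and route path are placeholders:
curl -X POST http://localhost:8000/my-llm-route \
  --header "apikey: MY_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{"messages": [{"role": "user", "content": "Hello"}]}'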
Make the following request, substituting your own access token, region, control plane ID, and consumer ID:
curl -X POST \
https://{us|eu}.api.konghq.com/v2/control-planes/{controlPlaneId}/core-entities/consumers/{consumerId}/plugins \
--header "accept: application/json" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer TOKEN" \
--data '{"name":"ai-proxy-advanced","config":{"targets":[{"route_type":"llm/v1/chat","auth":{"header_name":"Authorization","header_value":"Bearer token"},"model":{"provider":"openai"}},{"route_type":"llm/v1/chat","model":{"provider":"llama2","options":{"llama2_format":"openai","upstream_url":"http://httpbin"}}}]}}'
See the Konnect API reference to learn about region-specific URLs and personal access tokens.
First, create a KongPlugin resource:
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: ai-proxy-advanced-example
plugin: ai-proxy-advanced
config:
  targets:
  - route_type: llm/v1/chat
    auth:
      header_name: Authorization
      header_value: Bearer token
    model:
      provider: openai
  - route_type: llm/v1/chat
    model:
      provider: llama2
      options:
        llama2_format: openai
        upstream_url: http://httpbin
" | kubectl apply -f -
Next, apply the KongPlugin resource to the consumer by annotating the KongConsumer object as follows:
kubectl annotate KongConsumer CONSUMER_NAME konghq.com/plugins=ai-proxy-advanced-example
Replace CONSUMER_NAME with the name of the consumer that this plugin configuration will target. You can see your available consumers by running kubectl get KongConsumer. To learn more about KongConsumer objects, see Provisioning Consumers and Credentials.
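Alternatively, instead of annotating an existing object with kubectl, you can attach the plugin directly in the KongConsumer manifest. This is a sketch; the consumer name, username, and any credentials are placeholders that depend on your setup:
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: my-consumer
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: ai-proxy-advanced-example
username: my-consumer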
Note: The KongPlugin resource only needs to be defined once and can be applied to any service, consumer, or route in the namespace. If you want the plugin to be available cluster-wide, create the resource as a KongClusterPlugin instead of a KongPlugin.
Add this section to your declarative configuration file:
plugins:
- name: ai-proxy-advanced
  consumer: CONSUMER_NAME|ID
  config:
    targets:
    - route_type: llm/v1/chat
      auth:
        header_name: Authorization
        header_value: Bearer token
      model:
        provider: openai
    - route_type: llm/v1/chat
      model:
        provider: llama2
        options:
          llama2_format: openai
          upstream_url: http://httpbin
Replace CONSUMER_NAME|ID with the ID or name of the consumer that this plugin configuration will target.
Prerequisite: Configure your Personal Access Token
terraform {
  required_providers {
    konnect = {
      source = "kong/konnect"
    }
  }
}

provider "konnect" {
  personal_access_token = "kpat_YOUR_TOKEN"
  server_url            = "https://us.api.konghq.com/"
}
Add the following to your Terraform configuration to create a Konnect Gateway Plugin:
resource "konnect_gateway_plugin_ai_proxy_advanced" "my_ai_proxy_advanced" {
  enabled = true
  config = {
    targets = [
      {
        route_type = "llm/v1/chat"
        auth = {
          header_name  = "Authorization"
          header_value = "Bearer token"
        }
        model = {
          provider = "openai"
        }
      },
      {
        route_type = "llm/v1/chat"
        model = {
          provider = "llama2"
          options = {
            llama2_format = "openai"
            upstream_url  = "http://httpbin"
          }
        }
      }
    ]
  }
  control_plane_id = konnect_gateway_control_plane.my_konnect_cp.id
  consumer = {
    id = konnect_gateway_consumer.my_consumer.id
  }
}
The following examples provide some typical configurations for enabling the ai-proxy-advanced plugin on a consumer group.
Make the following request:
curl -X POST http://localhost:8001/consumer_groups/{consumerGroupName|Id}/plugins \
  --header "accept: application/json" \
  --header "Content-Type: application/json" \
  --data '
  {
    "name": "ai-proxy-advanced",
    "config": {
      "targets": [
        {
          "route_type": "llm/v1/chat",
          "auth": {
            "header_name": "Authorization",
            "header_value": "Bearer token"
          },
          "model": {
            "provider": "openai"
          }
        },
        {
          "route_type": "llm/v1/chat",
          "model": {
            "provider": "llama2",
            "options": {
              "llama2_format": "openai",
              "upstream_url": "http://httpbin"
            }
          }
        }
      ]
    }
  }
  '
Replace {consumerGroupName|Id} with the ID or name of the consumer group that this plugin configuration will target.
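If the consumer group doesn't exist yet, you can create it and add a consumer to it through the Admin API first. This is a sketch of the typical requests, with the group and consumer names as placeholders:
curl -X POST http://localhost:8001/consumer_groups \
  --data name=my-consumer-group
curl -X POST http://localhost:8001/consumer_groups/my-consumer-group/consumers \
  --data consumer=CONSUMER_NAME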
Make the following request, substituting your own access token, region, control plane ID, and consumer group ID:
curl -X POST \
https://{us|eu}.api.konghq.com/v2/control-planes/{controlPlaneId}/core-entities/consumer_groups/{consumerGroupId}/plugins \
--header "accept: application/json" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer TOKEN" \
--data '{"name":"ai-proxy-advanced","config":{"targets":[{"route_type":"llm/v1/chat","auth":{"header_name":"Authorization","header_value":"Bearer token"},"model":{"provider":"openai"}},{"route_type":"llm/v1/chat","model":{"provider":"llama2","options":{"llama2_format":"openai","upstream_url":"http://httpbin"}}}]}}'
See the Konnect API reference to learn about region-specific URLs and personal access tokens.
First, create a KongPlugin resource:
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: ai-proxy-advanced-example
plugin: ai-proxy-advanced
config:
  targets:
  - route_type: llm/v1/chat
    auth:
      header_name: Authorization
      header_value: Bearer token
    model:
      provider: openai
  - route_type: llm/v1/chat
    model:
      provider: llama2
      options:
        llama2_format: openai
        upstream_url: http://httpbin
" | kubectl apply -f -
Next, apply the KongPlugin resource to the consumer group by annotating the KongConsumerGroup object as follows:
kubectl annotate KongConsumerGroup CONSUMER_GROUP_NAME konghq.com/plugins=ai-proxy-advanced-example
Replace CONSUMER_GROUP_NAME with the name of the consumer group that this plugin configuration will target. You can see your available consumer groups by running kubectl get KongConsumerGroup.
Note: The KongPlugin resource only needs to be defined once and can be applied to any service, consumer, consumer group, or route in the namespace. If you want the plugin to be available cluster-wide, create the resource as a KongClusterPlugin instead of a KongPlugin.
Add this section to your declarative configuration file:
plugins:
- name: ai-proxy-advanced
  consumer_group: CONSUMER_GROUP_NAME|ID
  config:
    targets:
    - route_type: llm/v1/chat
      auth:
        header_name: Authorization
        header_value: Bearer token
      model:
        provider: openai
    - route_type: llm/v1/chat
      model:
        provider: llama2
        options:
          llama2_format: openai
          upstream_url: http://httpbin
Replace CONSUMER_GROUP_NAME|ID with the ID or name of the consumer group that this plugin configuration will target.
Prerequisite: Configure your Personal Access Token
terraform {
  required_providers {
    konnect = {
      source = "kong/konnect"
    }
  }
}

provider "konnect" {
  personal_access_token = "kpat_YOUR_TOKEN"
  server_url            = "https://us.api.konghq.com/"
}
Add the following to your Terraform configuration to create a Konnect Gateway Plugin:
resource "konnect_gateway_plugin_ai_proxy_advanced" "my_ai_proxy_advanced" {
  enabled = true
  config = {
    targets = [
      {
        route_type = "llm/v1/chat"
        auth = {
          header_name  = "Authorization"
          header_value = "Bearer token"
        }
        model = {
          provider = "openai"
        }
      },
      {
        route_type = "llm/v1/chat"
        model = {
          provider = "llama2"
          options = {
            llama2_format = "openai"
            upstream_url  = "http://httpbin"
          }
        }
      }
    ]
  }
  control_plane_id = konnect_gateway_control_plane.my_konnect_cp.id
  consumer_group = {
    id = konnect_gateway_consumer_group.my_consumer_group.id
  }
}
A plugin that is not associated with any service, route, consumer, or consumer group is considered global and runs on every request.
- In self-managed Kong Gateway Enterprise, the plugin applies to every entity in a given workspace.
- In self-managed Kong Gateway (OSS), the plugin applies to your entire environment.
- In Konnect, the plugin applies to every entity in a given control plane.
Read the Plugin Reference and the Plugin Precedence sections for more information.
The following examples provide some typical configurations for enabling the AI Proxy Advanced plugin globally.
Make the following request:
curl -X POST http://localhost:8001/plugins/ \
  --header "accept: application/json" \
  --header "Content-Type: application/json" \
  --data '
  {
    "name": "ai-proxy-advanced",
    "config": {
      "targets": [
        {
          "route_type": "llm/v1/chat",
          "auth": {
            "header_name": "Authorization",
            "header_value": "Bearer token"
          },
          "model": {
            "provider": "openai"
          }
        },
        {
          "route_type": "llm/v1/chat",
          "model": {
            "provider": "llama2",
            "options": {
              "llama2_format": "openai",
              "upstream_url": "http://httpbin"
            }
          }
        }
      ]
    }
  }
  '
Make the following request, substituting your own access token, region, and control plane ID:
curl -X POST \
https://{us|eu}.api.konghq.com/v2/control-planes/{controlPlaneId}/core-entities/plugins/ \
--header "accept: application/json" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer TOKEN" \
--data '{"name":"ai-proxy-advanced","config":{"targets":[{"route_type":"llm/v1/chat","auth":{"header_name":"Authorization","header_value":"Bearer token"},"model":{"provider":"openai"}},{"route_type":"llm/v1/chat","model":{"provider":"llama2","options":{"llama2_format":"openai","upstream_url":"http://httpbin"}}}]}}'
See the Konnect API reference to learn about region-specific URLs and personal access tokens.
Create a KongClusterPlugin resource and label it as global:
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: <global-ai-proxy-advanced>
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"
config:
  targets:
  - route_type: llm/v1/chat
    auth:
      header_name: Authorization
      header_value: Bearer token
    model:
      provider: openai
  - route_type: llm/v1/chat
    model:
      provider: llama2
      options:
        llama2_format: openai
        upstream_url: http://httpbin
plugin: ai-proxy-advanced
Add a plugins entry in the declarative configuration file:
plugins:
- name: ai-proxy-advanced
  config:
    targets:
    - route_type: llm/v1/chat
      auth:
        header_name: Authorization
        header_value: Bearer token
      model:
        provider: openai
    - route_type: llm/v1/chat
      model:
        provider: llama2
        options:
          llama2_format: openai
          upstream_url: http://httpbin
Prerequisite: Configure your Personal Access Token
terraform {
  required_providers {
    konnect = {
      source = "kong/konnect"
    }
  }
}

provider "konnect" {
  personal_access_token = "kpat_YOUR_TOKEN"
  server_url            = "https://us.api.konghq.com/"
}
Add the following to your Terraform configuration to create a Konnect Gateway Plugin:
resource "konnect_gateway_plugin_ai_proxy_advanced" "my_ai_proxy_advanced" {
  enabled = true
  config = {
    targets = [
      {
        route_type = "llm/v1/chat"
        auth = {
          header_name  = "Authorization"
          header_value = "Bearer token"
        }
        model = {
          provider = "openai"
        }
      },
      {
        route_type = "llm/v1/chat"
        model = {
          provider = "llama2"
          options = {
            llama2_format = "openai"
            upstream_url  = "http://httpbin"
          }
        }
      }
    ]
  }
  control_plane_id = konnect_gateway_control_plane.my_konnect_cp.id
}