AI Gateway Enterprise: This plugin is only available as part of our AI Gateway Enterprise offering.
The AI Proxy Advanced plugin lets you transform and proxy requests to multiple AI providers and models at the same time, which allows you to set up load balancing between targets.
The AI Proxy Advanced plugin accepts requests in one of a few defined and standardized formats, translates them to the configured target format, and then transforms the response back into a standard format.
The AI Proxy Advanced plugin supports capabilities across batch processing, multimodal embeddings, agents, audio, image, streaming, and more, spanning multiple providers:
For Kong Gateway versions 3.6 and later:
Chat APIs: Multi-turn conversations with system/user/assistant roles.
Completions API: Generates free-form text from a prompt.
For Kong Gateway versions 3.11 and later:
Batch, assistants, and files APIs: Support parallel LLM calls for efficiency. Assistants enable stateful, tool-augmented agents. Files provide persistent document storage for richer context across sessions.
Audio capabilities APIs: Provide speech-to-text transcription, real-time translation, and text-to-speech synthesis for voice agents, multilingual interfaces, and meeting analysis.
Image generation and editing APIs: Generate and modify images from text prompts to support multimodal agents with visual input and output.
Responses API: Return response metadata for debugging, evaluation, and response tuning.
AWS Bedrock agent APIs: Support advanced orchestration and real-time RAG with Converse, ConverseStream, RetrieveAndGenerate, and RetrieveAndGenerateStream.
Hugging Face text generation: Enable text generation and streaming using open-source Hugging Face models.
Embeddings API: Provide unified text-to-vector embedding generation with multi-vendor support and analytics.
Realtime streaming: Stream completions token-by-token for low-latency, interactive experiences and live analytics.
The following reference tables detail feature availability across supported LLM providers when used with the AI Proxy Advanced plugin.
The AI Proxy Advanced plugin mediates the following for you (a minimal configuration sketch follows this list):
Request and response formats appropriate for the configured config.targets.model.provider and config.targets.route_type
The following service request coordinates (unless the model is self-hosted):
Protocol
Host name
Port
Path
HTTP method
Authentication on behalf of the Kong API consumer
Decorating the request with parameters from the config.targets.model.options block, appropriate for the chosen provider
Recording of usage statistics of the configured LLM provider and model into your selected Kong log plugin output
Optionally, recording all post-transformation request and response messages exchanged between users and the configured LLM
Fulfillment of requests to self-hosted models, based on select supported format transformations
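To make this concrete, here is a minimal configuration sketch, not a complete or authoritative example: the service name (llm-service), the model, and the API key placeholder are illustrative, and only a single target is shown.

# Minimal sketch: attach AI Proxy Advanced to a hypothetical service
# named "llm-service" with a single OpenAI chat target.
curl -X POST http://localhost:8001/services/llm-service/plugins \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ai-proxy-advanced",
    "config": {
      "targets": [
        {
          "route_type": "llm/v1/chat",
          "model": {
            "provider": "openai",
            "name": "gpt-4o"
          },
          "auth": {
            "header_name": "Authorization",
            "header_value": "Bearer <OPENAI_API_KEY>"
          }
        }
      ]
    }
  }'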
Flattening all of the provider formats allows you to standardize how you manipulate the data before and after transmission. It also allows you to offer Kong Gateway Consumers a choice of LLMs with consistent request and response formats, regardless of the backend provider or model.
v3.11+ AI Proxy Advanced supports REST-based full-text responses, including RESTful endpoints such as llm/v1/responses, llm/v1/files, llm/v1/assistants, and llm/v1/batches. RESTful endpoints support CRUD operations: you can POST to create a response, GET to retrieve it, or DELETE to remove it.
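For example, assuming the plugin is configured with the llm/v1/responses route type and exposed at a hypothetical /responses path, the verbs map as follows (the path, model, and response ID are illustrative):

# Create a response
curl -X POST http://localhost:8000/responses \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4.1", "input": "Tell me a one-sentence story."}'

# Retrieve it by ID
curl http://localhost:8000/responses/resp_abc123

# Delete it
curl -X DELETE http://localhost:8000/responses/resp_abc123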
The plugin’s route_type should be set based on the target upstream endpoint and model, according to the following capability matrix:
The following requirements are enforced by upstream providers:
For Azure Responses API, set config.azure_api_version to "preview".
For OpenAI and Azure Realtime APIs, include the header OpenAI-Beta: realtime=v1.
Only WebSocket is supported; WebRTC is not.
For OpenAI and Azure Assistant APIs, include the header OpenAI-Beta: assistants=v2.
For requests with large payloads (e.g., image edits, audio transcription/translation), consider increasing config.max_request_body_size to three times the raw binary size.
To use the realtime/v1/realtime route, you must configure the protocols to ws and/or wss on both the service and the route that the plugin is attached to, as sketched below.
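A sketch of that setup via the Admin API; the names, host, and paths are illustrative:

# Service pointing at the provider's realtime endpoint (illustrative).
curl -X POST http://localhost:8001/services \
  -d "name=realtime-service" \
  -d "protocol=wss" \
  -d "host=api.openai.com" \
  -d "port=443" \
  -d "path=/v1/realtime"

# Route that accepts ws/wss traffic for the realtime path.
curl -X POST http://localhost:8001/services/realtime-service/routes \
  -d "name=realtime-route" \
  -d "protocols[]=ws" \
  -d "protocols[]=wss" \
  -d "paths[]=/realtime/v1/realtime"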
Mistral: As defined in config.targets.model.options.upstream_url
Llama2: As defined in config.targets.model.options.upstream_url
Amazon Bedrock: https://bedrock-runtime.{region}.amazonaws.com
Gemini: https://generativelanguage.googleapis.com
Hugging Face: https://api-inference.huggingface.co
While only the Llama2 and Mistral models are classed as self-hosted, the target URL can be overridden for any of the supported providers.
For example, a self-hosted or otherwise OpenAI-compatible endpoint can be called by setting the same config.targets.model.options.upstream_url plugin option.
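For example, a target for a hypothetical self-hosted, OpenAI-compatible server might look like the following sketch; the host, port, and model name are placeholders.

# Sketch: override the upstream URL to reach a self-hosted endpoint.
curl -X POST http://localhost:8001/services/llm-service/plugins \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ai-proxy-advanced",
    "config": {
      "targets": [
        {
          "route_type": "llm/v1/chat",
          "model": {
            "provider": "openai",
            "name": "my-self-hosted-model",
            "options": {
              "upstream_url": "http://llm.internal:11434/v1/chat/completions"
            }
          }
        }
      ]
    }
  }'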
v3.11+ If you are using a provider’s native SDK, Kong Gateway can transparently proxy the request without any transformation and return the response unmodified. To do this, set config.llm_format to a value other than openai, such as gemini or bedrock. See the section below for more details.
In this mode, Kong Gateway will still provide useful analytics, logging, and cost calculation.
v3.10+ By default, Kong Gateway uses the OpenAI format, but you can customize this using config.llm_format. If llm_format is not set to openai, the plugin will not transform the request when sending it upstream and will leave it as-is.
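For example, to pass native Bedrock SDK traffic through untransformed, a sketch might look like this; the model ID and region are placeholders, and field names should be checked against your plugin version.

# Sketch: native-format passthrough for Bedrock SDK clients.
curl -X POST http://localhost:8001/services/llm-service/plugins \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ai-proxy-advanced",
    "config": {
      "llm_format": "bedrock",
      "targets": [
        {
          "route_type": "llm/v1/chat",
          "model": {
            "provider": "bedrock",
            "name": "anthropic.claude-3-sonnet-20240229-v1:0",
            "options": {
              "bedrock": { "aws_region": "us-east-1" }
            }
          }
        }
      ]
    }
  }'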
The Kong Gateway AI Proxy accepts the following input formats, standardized across all providers. The config.targets.route_type must be configured to match the required request and response formats shown in the examples below.
The following examples show standardized text-based request formats for each supported llm/v1/* route. These formats are normalized across providers to help simplify downstream parsing and integration.
{"messages":[{"role":"system","content":"You are a scientist."},{"role":"user","content":"What is the theory of relativity?"}]}
v3.9+ With Amazon Bedrock, you can include your guardrail configuration in the request:
{"messages":[{"role":"system","content":"You are a scientist."},{"role":"user","content":"What is the theory of relativity?"}],"guardrailConfig":{"guardrailIdentifier":"<guardrail_identifier>","guardrailVersion":"1","trace":"enabled"}}
{"prompt":"You are a scientist. What is the theory of relativity?"}
Supported in: v3.11+
{"input":"The food was delicious and the waiter...","model":"text-embedding-ada-002","encoding_format":"float"}
{"instructions":"You are a personal math tutor. When asked a question, write and run Python code to answer the question.","name":"Math Tutor","tools":[{"type":"code_interpreter"}],"model":"gpt-4o"}
The following examples show standardized audio and image request formats for each supported route. These formats are normalized across providers to help simplify downstream parsing and integration.
Supported in: v3.11+
curl http://localhost:8000 \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "The quick brown fox jumped over the lazy dog.",
    "voice": "alloy"
  }' \
  --output speech.mp3
{"choices":[{"finish_reason":"stop","index":0,"message":{"content":"The theory of relativity is a...","role":"assistant"}}],"created":1707769597,"id":"chatcmpl-ID","model":"gpt-4-0613","object":"chat.completion","usage":{"completion_tokens":5,"prompt_tokens":26,"total_tokens":31}}
{"choices":[{"finish_reason":"stop","index":0,"text":"The theory of relativity is a..."}],"created":1707769597,"id":"cmpl-ID","model":"gpt-3.5-turbo-instruct","object":"text_completion","usage":{"completion_tokens":10,"prompt_tokens":7,"total_tokens":17}}
{"id":"asst_abc123","object":"assistant","created_at":1698984975,"name":"Math Tutor","description":null,"model":"gpt-4o","instructions":"You are a personal math tutor. When asked a question, write and run Python code to answer the question.","tools":[{"type":"code_interpreter"}],"metadata":{},"top_p":1.0,"temperature":1.0,"response_format":"auto"}
Supported in: v3.11+
{"id":"resp_67ccd2bed1ec8190b14f964abc0542670bb6a6b452d3795b","object":"response","created_at":1741476542,"status":"completed","error":null,"incomplete_details":null,"instructions":null,"max_output_tokens":null,"model":"gpt-4.1-2025-04-14","output":[{"type":"message","id":"msg_67ccd2bf17f0819081ff3bb2cf6508e60bb6a6b452d3795b","status":"completed","role":"assistant","content":[{"type":"output_text","text":"In a peaceful grove beneath a silver moon, a unicorn named Lumina discovered a hidden pool that reflected the stars. As she dipped her horn into the water, the pool began to shimmer, revealing a pathway to a magical realm of endless night skies. Filled with wonder, Lumina whispered a wish for all who dream to find their own hidden magic, and as she glanced back, her hoofprints sparkled like stardust.","annotations":[]}]}],"parallel_tool_calls":true,"previous_response_id":null,"reasoning":{"effort":null,"summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled","usage":{"input_tokens":36,"input_tokens_details":{"cached_tokens":0},"output_tokens":87,"output_tokens_details":{"reasoning_tokens":0},"total_tokens":123},"user":null,"metadata":{}}
The following examples show standardized response formats returned by supported audio/ and image/ routes. These formats are normalized across providers to support consistent multimodal output parsing.
Supported in: v3.11+
The response contains the audio file content of speech.mp3.
Supported in: v3.11+
{"text":"Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100 or a 1,000 times bigger. This is a place where you can get to do that.","usage":{"type":"tokens","input_tokens":14,"input_token_details":{"text_tokens":0,"audio_tokens":14},"output_tokens":45,"total_tokens":59}}
Supported in: v3.11+
{"text":"Hello, my name is Wolfgang and I come from Germany. Where are you heading today?"}
This plugin supports several load-balancing algorithms, similar to those used for Kong upstreams, allowing efficient distribution of requests across different AI models. The supported algorithms include:
The consistent-hashing algorithm routes requests based on a specified header value (X-Hashing-Header). Requests with the same header are repeatedly routed to the same model, enabling sticky sessions for maintaining context or affinity across user interactions.
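A sketch of this balancer; the hash_on_header field name and target layout are drawn from the plugin schema and should be verified against your version:

# Sketch: requests with the same X-Hashing-Header value stick to one target.
curl -X POST http://localhost:8001/services/llm-service/plugins \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ai-proxy-advanced",
    "config": {
      "balancer": {
        "algorithm": "consistent-hashing",
        "hash_on_header": "X-Hashing-Header"
      },
      "targets": [
        { "route_type": "llm/v1/chat", "model": { "provider": "openai", "name": "gpt-4o" } },
        { "route_type": "llm/v1/chat", "model": { "provider": "openai", "name": "gpt-4o-mini" } }
      ]
    }
  }'

A client that sends, for example, X-Hashing-Header: user-42 on every request will keep reaching the same target.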
The lowest-usage algorithm in AI Proxy Advanced balances load by distributing requests to the models with the lowest usage volume, measured by token consumption (for example, prompt, completion, or total tokens).
The priority algorithm routes requests to groups of models based on assigned weights. Higher-weighted groups are preferred, and if all models in a group fail, the plugin falls back to the next group. This allows for reliable failover and cost-aware routing across multiple AI models.
The round-robin algorithm distributes requests across models based on their respective weights. For example, if your models gpt-4, gpt-4o-mini, and gpt-3 have weights of 70, 25, and 5 respectively, they’ll receive approximately 70%, 25%, and 5% of the traffic in turn. Requests are distributed proportionally, independent of usage or latency metrics.
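The example above maps to a configuration sketch like the following; auth is omitted for brevity, and the service and model names are illustrative.

# Sketch: weighted round-robin across three OpenAI chat targets.
curl -X POST http://localhost:8001/services/llm-service/plugins \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ai-proxy-advanced",
    "config": {
      "balancer": { "algorithm": "round-robin" },
      "targets": [
        { "weight": 70, "route_type": "llm/v1/chat", "model": { "provider": "openai", "name": "gpt-4" } },
        { "weight": 25, "route_type": "llm/v1/chat", "model": { "provider": "openai", "name": "gpt-4o-mini" } },
        { "weight": 5,  "route_type": "llm/v1/chat", "model": { "provider": "openai", "name": "gpt-3" } }
      ]
    }
  }'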
The semantic algorithm distributes requests to different models based on the similarity between the prompt in the request and the description provided in the model configuration. This allows Kong to automatically select the model best suited for the given domain or use case, which is especially useful when dealing with a diverse range of AI providers and models.
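A partial sketch: each target carries a description that prompts are matched against. Semantic routing also requires an embeddings model and vector database configuration, which are intentionally omitted here; the description field placement follows the plugin schema and should be verified for your version.

# Partial sketch: per-target descriptions drive semantic selection.
# The embeddings and vector-database settings this algorithm requires
# are omitted.
curl -X POST http://localhost:8001/services/llm-service/plugins \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ai-proxy-advanced",
    "config": {
      "balancer": { "algorithm": "semantic" },
      "targets": [
        {
          "route_type": "llm/v1/chat",
          "description": "Mathematics, arithmetic, and quantitative reasoning",
          "model": { "provider": "openai", "name": "gpt-4o" }
        },
        {
          "route_type": "llm/v1/chat",
          "description": "General conversation and creative writing",
          "model": { "provider": "openai", "name": "gpt-4o-mini" }
        }
      ]
    }
  }'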
The load balancer has customizable retries and timeouts for requests, and can redirect a request to a different model in case of failure. This allows you to have a fallback in case one of your targets is unavailable.
In Kong Gateway 3.10 and later, this plugin supports fallback across targets with any supported format.
In versions earlier than 3.10, fallback is not supported across targets with different formats; you can still use multiple providers, but only if their formats are compatible.
For example, load balancers with the following target combinations are supported:
Different OpenAI models
OpenAI models and Mistral models with the OpenAI format
Mistral models with the OLLAMA format and Llama models with the OLLAMA format
Some errors, such as client errors, result in a failure and don't fail over to another target.
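A retry and fallback sketch under those constraints; the balancer field names (retries, failover_criteria) are taken from the plugin schema and should be verified against your version:

# Sketch: retry on error/timeout, falling back to the other target.
curl -X POST http://localhost:8001/services/llm-service/plugins \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ai-proxy-advanced",
    "config": {
      "balancer": {
        "algorithm": "round-robin",
        "retries": 3,
        "failover_criteria": ["error", "timeout"]
      },
      "targets": [
        { "route_type": "llm/v1/chat", "model": { "provider": "openai", "name": "gpt-4o" } },
        { "route_type": "llm/v1/chat", "model": { "provider": "openai", "name": "gpt-4o-mini" } }
      ]
    }
  }'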
$(headers.header_name): The value of a specific request header.
$(uri_captures.path_parameter_name): The value of a captured URI path parameter.
$(query_params.query_parameter_name): The value of a query string parameter.
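For example, these expressions can let clients select the model per request. In this hypothetical sketch, the model name is resolved from an x-model-name request header:

# Hypothetical: resolve the model name from a request header.
curl -X POST http://localhost:8001/services/llm-service/plugins \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ai-proxy-advanced",
    "config": {
      "targets": [
        {
          "route_type": "llm/v1/chat",
          "model": {
            "provider": "openai",
            "name": "$(headers.x-model-name)"
          }
        }
      ]
    }
  }'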
You can combine these parameters with an OpenAI-compatible SDK in multiple ways using the AI Proxy and AI Proxy Advanced plugins, depending on your specific use case.
If Kong Gateway is running on Azure, AI Proxy Advanced can detect the designated Managed Identity or User-Assigned Identity of that Azure compute resource and use it for authentication.
In your AI Proxy Advanced configuration, set the following parameters:
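The sketch below shows the relevant auth options; azure_client_id is only needed for a User-Assigned Identity, the instance and deployment names are placeholders, and field names should be verified against your plugin version.

# Sketch: authenticate to Azure OpenAI with a Managed Identity.
# azure_client_id is only required for a User-Assigned Identity.
curl -X POST http://localhost:8001/services/llm-service/plugins \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ai-proxy-advanced",
    "config": {
      "targets": [
        {
          "route_type": "llm/v1/chat",
          "model": {
            "provider": "azure",
            "name": "gpt-4o",
            "options": {
              "azure_instance": "my-azure-instance",
              "azure_deployment_id": "my-deployment"
            }
          },
          "auth": {
            "azure_use_managed_identity": true,
            "azure_client_id": "<client-id>"
          }
        }
      ]
    }
  }'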