Made by
Kong Inc.
Supported Gateway Topologies
hybrid db-less traditional
Supported Konnect Deployments
hybrid cloud-gateways serverless
Compatible Protocols
grpc grpcs http https ws wss
Minimum Version
Kong Gateway - 3.6
Tags
#ai

The AI Proxy plugin lets you transform and proxy requests to a number of AI providers and models.

The AI Proxy plugin accepts requests in one of a few defined, standardized formats, translates them to the configured target format, and then transforms the response back into a standard format.
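
For example, a minimal declarative configuration for a chat route might look like the following sketch. It assumes an OpenAI key supplied as a bearer token and uses the config.targets[] parameter paths referenced throughout this page; check the configuration reference for your exact version's schema.

```yaml
plugins:
  - name: ai-proxy
    config:
      targets:
        - route_type: llm/v1/chat        # standardized chat format, translated per provider
          auth:
            header_name: Authorization
            header_value: Bearer <OPENAI_API_KEY>   # placeholder; supply your own key
          model:
            provider: openai
            name: gpt-4o
```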

Overview of capabilities

The AI Proxy plugin supports capabilities across batch processing, multimodal embeddings, agents, audio, image, streaming, and more, spanning multiple providers:

For Kong Gateway 3.6 and later:

  • Chat APIs: Multi-turn conversations with system/user/assistant roles.

  • Completions API: Generates free-form text from a prompt.

For Kong Gateway 3.11 and later:

  • Batch, assistants, and files APIs: Support parallel LLM calls for efficiency. Assistants enable stateful, tool-augmented agents. Files provide persistent document storage for richer context across sessions.
  • Audio capabilities APIs: Provide speech-to-text transcription, real-time translation, and text-to-speech synthesis for voice agents, multilingual interfaces, and meeting analysis.
  • Image generation and editing APIs: Generate and modify images from text prompts to support multimodal agents with visual input and output.
  • Responses API: Return response metadata for debugging, evaluation, and response tuning.
  • AWS Bedrock agent APIs: Support advanced orchestration and real-time RAG with Converse, ConverseStream, RetrieveAndGenerate, and RetrieveAndGenerateStream.
  • Hugging Face text generation: Enable text generation and streaming using open-source Hugging Face models.
  • Embeddings API: Provide unified text-to-vector embedding generation with multi-vendor support and analytics.

The following reference tables detail feature availability across supported LLM providers when used with the AI Proxy plugin.

Core text generation

Support for chat, completions, and embeddings.

Provider Chat Completions Chat streaming Completions streaming Embeddings
OpenAI (GPT-3.5, GPT-4, GPT-4o, and Multi-Modal)
Cohere
Azure
Anthropic
Mistral (mistral.ai, OpenAI, raw, and OLLAMA formats)
Llama2 (supports Llama2 and Llama3 models and raw, OLLAMA, and OpenAI formats)
Amazon Bedrock
Gemini
Hugging Face

Advanced text generation v3.11+

Support for function calling, tool use, and batch processing.

Provider Files Batches Assistants Responses
OpenAI (GPT-3.5, GPT-4, GPT-4o, and Multi-Modal)
Cohere
Azure
Anthropic
Mistral (mistral.ai, OpenAI, raw, and OLLAMA formats)
Llama2 (supports Llama2 and Llama3 models and raw, OLLAMA, and OpenAI formats)
Amazon Bedrock
Gemini
Hugging Face

Audio features v3.11+

Support for text-to-speech, transcription, and translation.

Provider Audio: Speech Audio: Transcriptions Audio: Translations
OpenAI (GPT-3.5, GPT-4, GPT-4o, and Multi-Modal)
Cohere
Azure
Anthropic
Mistral (mistral.ai, OpenAI, raw, and OLLAMA formats)
Llama2 (supports Llama2 and Llama3 models and raw, OLLAMA, and OpenAI formats)
Amazon Bedrock
Gemini
Hugging Face

Image features v3.11+

Support for image generation and image editing.

Provider Image: Generations Image: Edits
OpenAI (GPT-3.5, GPT-4, GPT-4o, and Multi-Modal)
Cohere
Azure
Anthropic
Mistral (mistral.ai, OpenAI, raw, and OLLAMA formats)
Llama2 (supports Llama2 and Llama3 models and raw, OLLAMA, and OpenAI formats)
Amazon Bedrock
Gemini
Hugging Face

How it works

The AI Proxy plugin will mediate the following for you:

  • Request and response formats appropriate for the configured config.targets[].model.provider and config.targets[].route_type
  • The following service request coordinates (unless the model is self-hosted):
    • Protocol
    • Host name
    • Port
    • Path
    • HTTP method
  • Authentication on behalf of the Kong API consumer
  • Decorating the request with parameters from the config.targets[].model.options block, appropriate for the chosen provider
  • Recording of usage statistics of the configured LLM provider and model into your selected Kong log plugin output
  • Optionally, recording all post-transformation request and response messages exchanged between users and the configured LLM
  • Fulfillment of requests to self-hosted models, based on select supported format transformations
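
As a sketch of how these pieces fit together, the following example decorates requests with common parameters from the config.targets[].model.options block. Parameter availability varies by provider; the provider, model name, and option values here are illustrative assumptions only.

```yaml
plugins:
  - name: ai-proxy
    config:
      targets:
        - route_type: llm/v1/chat
          auth:
            header_name: Authorization              # injected upstream on the consumer's behalf
            header_value: Bearer <COHERE_API_KEY>   # placeholder
          model:
            provider: cohere
            name: command-r                         # illustrative model name
            options:
              max_tokens: 512                       # request decoration from model.options
              temperature: 0.7
```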

Flattening all of the provider formats allows you to standardize the manipulation of the data before and after transmission. It also allows you to offer Kong Gateway Consumers a choice of LLMs with consistent request and response formats, regardless of the backend provider or model.

v3.11+ AI Proxy supports REST-based full-text responses, including RESTful endpoints such as llm/v1/responses, llm/v1/files, llm/v1/assistants, and llm/v1/batches. RESTful endpoints support CRUD operations: you can POST to create a response, GET to retrieve it, or DELETE to remove it.
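
As a sketch of that pattern (the route prefix, paths, and response ID below are hypothetical placeholders; the exact paths depend on how your route is defined):

```sh
# Create a response (POST)
curl -X POST http://localhost:8000/responses-route/llm/v1/responses \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "input": "Summarize this document."}'

# Retrieve it later (GET)
curl http://localhost:8000/responses-route/llm/v1/responses/resp_abc123

# Remove it (DELETE)
curl -X DELETE http://localhost:8000/responses-route/llm/v1/responses/resp_abc123
```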

Request and response formats

The plugin’s route_type should be set according to the target upstream endpoint and model, as shown in the capability tables above.

The following requirements are enforced by upstream providers:

  • For Azure Responses API, set config.azure_api_version to "preview".
  • For OpenAI and Azure Assistant APIs, include the header OpenAI-Beta: assistants=v2.
  • For requests with large payloads (e.g., image edits, audio transcription/translation), consider increasing config.max_request_body_size to three times the raw binary size.
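
For instance, an Azure configuration reflecting these requirements might look like the following sketch (the deployment name is hypothetical, and the body-size value assumes an upload of roughly 8 MB):

```yaml
plugins:
  - name: ai-proxy
    config:
      azure_api_version: "preview"      # required for the Azure Responses API
      max_request_body_size: 25165824   # ~3x an 8 MB raw upload, per the sizing guidance above
      targets:
        - route_type: llm/v1/responses
          model:
            provider: azure
            name: my-gpt-4o-deployment  # hypothetical Azure deployment name
```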

The following upstream URL patterns are used:

Provider URL
OpenAI https://api.openai.com:443/{route_type_path}
Cohere https://api.cohere.com:443/{route_type_path}
Azure https://{azure_instance}.openai.azure.com:443/openai/deployments/{deployment_name}/{route_type_path}
Anthropic https://api.anthropic.com:443/{route_type_path}
Mistral As defined in config.targets[].model.options.upstream_url
Llama2 As defined in config.targets[].model.options.upstream_url
Amazon Bedrock https://bedrock-runtime.{region}.amazonaws.com
Gemini https://generativelanguage.googleapis.com
Hugging Face https://api-inference.huggingface.co

While only the Llama2 and Mistral models are classed as self-hosted, the target URL can be overridden for any of the supported providers. For example, a self-hosted or otherwise OpenAI-compatible endpoint can be called by setting the same config.targets[].model.options.upstream_url plugin option.
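
For example, pointing the plugin at a self-hosted, OpenAI-compatible server might look like this sketch (the local URL and model name are hypothetical):

```yaml
plugins:
  - name: ai-proxy
    config:
      targets:
        - route_type: llm/v1/chat
          model:
            provider: openai             # any OpenAI-compatible endpoint
            name: my-local-model         # hypothetical model name
            options:
              upstream_url: http://localhost:11434/v1/chat/completions   # e.g., a local inference server
```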

v3.11+ If you are using a provider’s native SDK, Kong Gateway allows you to transparently proxy the request without any transformation and return the response unmodified. To enable this, set config.llm_format to a value other than openai, such as gemini or bedrock. See Supported native LLM formats below for more details.

In this mode, Kong Gateway will still provide useful analytics, logging, and cost calculation.

Input formats

Kong Gateway mediates the request and response format based on the selected config.targets[].model.provider and config.targets[].route_type.

v3.10+ By default, Kong Gateway uses the OpenAI format, but you can customize this with config.llm_format. If llm_format is not set to openai, the plugin does not transform the request and sends it upstream as-is.

The Kong Gateway AI Proxy accepts the following input formats, standardized across all providers. The config.targets[].route_type must be configured to match the required request and response formats shown below.

Text generation inputs

The following examples show standardized text-based request formats for each supported llm/v1/* route. These formats are normalized across providers to help simplify downstream parsing and integration.
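
For instance, an llm/v1/chat request uses an OpenAI-style messages array (a representative sketch):

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is Kong Gateway?" }
  ]
}
```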

Audio and image generation inputs

The following examples show standardized audio and image request formats for each supported route. These formats are normalized across providers to help simplify downstream parsing and integration.
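
For example, an image-generation request loosely follows OpenAI's format, per the note later on this page (a representative sketch; exact fields can vary by provider and version):

```json
{
  "prompt": "A watercolor painting of a mountain lake at sunrise",
  "n": 1,
  "size": "1024x1024"
}
```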

Response formats

Conversely, the response formats are also transformed to a standard format across all providers:

Text-based responses
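
For example, an llm/v1/chat response is returned in an OpenAI-style shape (a representative sketch; field values are illustrative):

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Kong Gateway is a lightweight API gateway." },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 24, "completion_tokens": 11, "total_tokens": 35 }
}
```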

Image and audio responses

The following examples show standardized response formats returned by supported audio/ and image/ routes. These formats are normalized across providers to support consistent multimodal output parsing.
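
For example, an image-generation response might look like this sketch (illustrative only; providers may return a URL instead of inline data):

```json
{
  "created": 1720000000,
  "data": [
    { "b64_json": "<base64-encoded image bytes>" }
  ]
}
```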

The request and response formats are loosely modeled after OpenAI’s API. For detailed format specifications, see the sample OpenAPI specification.

Supported native LLM formats
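
As described above, setting config.llm_format to a native format passes that provider’s SDK traffic through untransformed. A sketch using the gemini value mentioned earlier (the model name is hypothetical):

```yaml
plugins:
  - name: ai-proxy
    config:
      llm_format: gemini           # accept and forward native Gemini payloads unmodified
      targets:
        - route_type: llm/v1/chat
          model:
            provider: gemini
            name: gemini-1.5-flash # hypothetical model name
```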

Caveats and limitations

The following sections detail the provider and statistic logging limitations.

Provider-specific limitations

  • Anthropic: Does not support llm/v1/completions or llm/v1/embeddings.
  • Llama2: Raw format lacks support for llm/v1/embeddings.
  • Bedrock and Gemini: Only support auth.allow_override = false.

Statistics logging limitations

  • Anthropic: No statistics logging for llm/v1/completions.
  • OpenAI and Azure: No statistics logging for assistants, batch, or audio APIs.
  • Bedrock: No statistics logging for image generation or editing APIs.

Templating v3.7+

The plugin allows you to substitute values in the config.targets[].model.name and any parameter under config.targets[].model.options with specific placeholders, similar to those in the Request Transformer Advanced plugin.

The following templated parameters are available:

  • $(headers.header_name): The value of a specific request header.
  • $(uri_captures.path_parameter_name): The value of a captured URI path parameter.
  • $(query_params.query_parameter_name): The value of a query string parameter.
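
For example, a route whose path captures a model segment could feed that capture into the model name. A sketch (the route path pattern and the capture name model are hypothetical):

```yaml
plugins:
  - name: ai-proxy
    config:
      targets:
        - route_type: llm/v1/chat
          model:
            provider: openai
            name: $(uri_captures.model)   # resolved per request from the URI capture
```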

You can combine these parameters with an OpenAI-compatible SDK in multiple ways using the AI Proxy plugin, depending on your specific use case:

  • Use a chat route with dynamic model selection: configure a chat route that reads the target model from the request path instead of hardcoding it in the configuration.
  • Use the Azure deployment relevant to a specific model name: configure a header capture to insert the requested model name directly into the plugin configuration, as a string substitution, for Kong AI Gateway deployments with Azure OpenAI.
  • Proxy multiple models deployed in the same Azure instance: configure one route that serves multiple models deployed in the same Azure instance.

