Kong Gateway
3.10.x (latest)
Get started with AI Gateway

With Kong’s AI Gateway, you can deploy AI infrastructure for traffic sent to one or more LLMs. A dedicated set of AI plugins, bundled with Kong Gateway distributions, lets you semantically route, secure, observe, accelerate, and govern that traffic.

Kong AI Gateway is a set of AI plugins: install Kong Gateway, then follow the documented configuration instructions for each plugin. The AI plugins are supported in all deployment modes, including Konnect, self-hosted traditional, hybrid, and DB-less, and on Kubernetes via the Kong Ingress Controller.

Konnect fully supports the AI plugins, both in hybrid mode and as a fully managed service.

You can enable most Kong Gateway AI capabilities with one of the following plugins:

  • AI Proxy: The open source AI proxy plugin.
  • AI Proxy Advanced: The enterprise version offering more advanced load balancing, routing, and retries.

These plugins enable upstream connectivity to the LLMs and direct Kong Gateway to proxy traffic to the intended LLM models. Once they are installed and your AI traffic is being proxied, you can combine them with any other Kong Gateway plugin to add further capabilities.

The main difference between simply putting an LLM’s API behind Kong Gateway and using the AI plugins is that the former only lets you interact with the traffic at the API level. With the AI plugins, Kong can understand the prompts being sent through the Gateway: the plugins introspect the request body and provide AI-specific capabilities to your traffic, beyond treating the LLMs as “just APIs”.
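For example, an OpenAI-style chat request carries its prompt inside the JSON body, so a gateway that parses that body can reason about the prompt itself. The following is a simplified illustration of that idea, not Kong’s actual implementation:

```python
import json

# A chat completion request in the OpenAI format, as it passes through the gateway.
raw_body = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
})

def extract_prompt(body):
    """Pull the latest user prompt out of an OpenAI-format request body."""
    payload = json.loads(body)
    user_messages = [m["content"] for m in payload["messages"] if m["role"] == "user"]
    return user_messages[-1] if user_messages else ""

# With the body parsed, a plugin can inspect, block, or rewrite the prompt
# instead of blindly forwarding opaque bytes.
print(extract_prompt(raw_body))  # Say this is a test!
```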

Prerequisites

Run Kong Gateway in Konnect, or use your distribution of choice:

  • The easiest way to get started is to run Kong Gateway for free on Konnect
  • To run Kong Gateway locally, use the quickstart script, or see all installation options

Set up AI Gateway

1. Create an ingress route

Create a service and a route to define the ingress route to consume your LLMs.

Kong Gateway Admin API

Create a Gateway service:

curl -i -X POST http://localhost:8001/services \
  --data name="llm_service" \
  --data url="http://fake.host.internal"

Then, create a route for the service:

curl -i -X POST http://localhost:8001/services/llm_service/routes \
  --data name="openai-llm" \
  --data paths="/openai"

Konnect API

Create a Gateway service:

curl -i -X POST \
  https://{us|eu}.api.konghq.com/v2/control-planes/{controlPlaneId}/core-entities/services/ \
  --header "accept: application/json" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer TOKEN" \
  --data '
    {
      "name": "llm_service",
      "url": "http://fake.host.internal"
    }'

Then, create a route for the service:

curl -i -X POST \
  https://{us|eu}.api.konghq.com/v2/control-planes/{controlPlaneId}/core-entities/services/{serviceID}/routes \
  --header "accept: application/json" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer TOKEN" \
  --data '
    {
      "name": "openai-llm",
      "paths": [
        "/openai"
      ]
    }'

Take note of the route ID in the response.

decK (YAML)

Add a service and a route to your decK state file:

services:
  - name: llm_service
    url: http://fake.host.internal
routes:
  - name: openai-llm
    paths:
    - "/openai"
    service:
      name: llm_service

Adding this route entity creates an ingress rule on the /openai path.

These examples use the placeholder URL http://fake.host.internal as the upstream URL for the service, and you don’t need to replace it with a real one. Once the AI Proxy plugin (or AI Proxy Advanced) is installed, the upstream proxying destination is determined dynamically from your AI Proxy plugin configuration, so the service’s upstream URL is never used.

2. Install the AI Proxy plugin

Configure your destination LLM using either AI Proxy or AI Proxy Advanced so that all traffic sent to your route is redirected to the correct LLM.

This example uses the AI Proxy plugin.

Kong Gateway Admin API

curl -i -X POST http://localhost:8001/routes/openai-llm/plugins \
  --header "accept: application/json" \
  --header "Content-Type: application/json" \
  --data '
  {
    "name": "ai-proxy",
    "config": {
      "route_type": "llm/v1/chat",
      "model": {
        "provider": "openai"
      }
    }
  }'

Konnect API

Replace {routeId} with the ID of the route created in the previous step:

curl -X POST \
https://{us|eu}.api.konghq.com/v2/control-planes/{controlPlaneId}/core-entities/routes/{routeId}/plugins \
  --header "accept: application/json" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer TOKEN" \
  --data '
  {
    "name": "ai-proxy",
    "config": {
      "route_type": "llm/v1/chat",
      "model": {
        "provider": "openai"
      }
    }
  }'

decK (YAML)

Add an AI Proxy plugin entry to your decK state file:

plugins:
  - name: ai-proxy
    route: openai-llm
    config:
      route_type: "llm/v1/chat"
      model:
        provider: openai

In this simple example, the client is allowed to consume any model from the openai provider. You can restrict which models can be consumed by specifying the model name explicitly with the config.model.name parameter, and you can manage the LLM credentials in Kong Gateway itself so that clients don’t have to send them.
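For instance, a decK entry along these lines pins the route to a single model and lets Kong Gateway attach the OpenAI credential on the client’s behalf. This is a sketch: config.model.name and config.auth.header_name/config.auth.header_value are AI Proxy parameters, and the vault reference assumes you have an environment-variable vault configured (you could also paste the key directly):

```yaml
plugins:
  - name: ai-proxy
    route: openai-llm
    config:
      route_type: "llm/v1/chat"
      auth:
        header_name: "Authorization"
        header_value: "Bearer {vault://env/openai-api-key}"
      model:
        provider: openai
        name: "gpt-4o-mini"   # only this model can be consumed
```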

3. Validate the connection to the LLM

Make your first request to OpenAI via Kong Gateway:

curl --http1.1 http://localhost:8000/openai \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $OPENAI_API_KEY" \
  --data '{
     "model": "gpt-4o-mini",
     "messages": [{"role": "user", "content": "Say this is a test!"}]
   }'

The response body should contain a reply beginning with This is a test!:

{
  "id": "chatcmpl-AIm1TMhTkcH1sf67GYXIM5fsfu94X9Gdk",
  "object": "chat.completion",
  "created": 1729037867,
  "model": "gpt-4o-mini-2024-07-18",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "This is a test! How can I assist you today?",
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 12,
    "total_tokens": 25,
    "prompt_tokens_details": {
      "cached_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0
    }
  },
  "system_fingerprint": "fp_r0bdr52e6e"
}

Now, your traffic is being properly proxied to OpenAI via Kong Gateway and the AI Proxy plugin.
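If you are scripting against the gateway, the interesting fields can be pulled out of such a response with a few lines of standard-library Python. A minimal sketch, using the sample response above (abridged to the relevant fields):

```python
import json

# The sample chat completion response returned through Kong Gateway.
response_body = json.dumps({
    "model": "gpt-4o-mini-2024-07-18",
    "choices": [
        {"index": 0,
         "message": {"role": "assistant",
                     "content": "This is a test! How can I assist you today?"},
         "finish_reason": "stop"}
    ],
    "usage": {"prompt_tokens": 13, "completion_tokens": 12, "total_tokens": 25},
})

data = json.loads(response_body)
reply = data["choices"][0]["message"]["content"]   # the assistant's answer
tokens_used = data["usage"]["total_tokens"]        # useful for cost tracking

print(reply)        # This is a test! How can I assist you today?
print(tokens_used)  # 25
```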

Installing other AI plugins

The AI Proxy and AI Proxy Advanced plugins are able to understand the incoming OpenAI protocol. This allows you to:

  • Route to all supported LLMs, even ones that don’t natively support the OpenAI specification: Kong automatically transforms the request, so you can write once and use any LLM.
  • Extract observability metrics for AI.
  • Cache traffic using the AI Semantic Cache plugin.
  • Secure traffic with the AI Prompt Guard and AI Semantic Prompt Guard plugins.
  • Provide prompt templates with the AI Prompt Template plugin.
  • Programmatically inject system or assistant prompts into all incoming prompts with the AI Prompt Decorator plugin.

See all the AI plugins for more capabilities.

For example, you can rate limit AI traffic based on the number of tokens that are being sent (as opposed to the number of API requests) using the AI Rate Limiting Advanced plugin:

Kong Gateway Admin API

curl -i -X POST http://localhost:8001/services/llm_service/plugins \
  --header "accept: application/json" \
  --header "Content-Type: application/json" \
  --data '
  {
    "name": "ai-rate-limiting-advanced",
    "config": {
      "llm_providers": [
        {
          "name": "openai",
          "limit": 5,
          "window_size": 60
        }
      ]
    }
  }'
Konnect API

curl -X POST \
  https://{us|eu}.api.konghq.com/v2/control-planes/{controlPlaneId}/core-entities/services/{serviceId}/plugins \
  --header "accept: application/json" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer TOKEN" \
  --data '
  {
    "name": "ai-rate-limiting-advanced",
    "config": {
      "llm_providers": [
        {
          "name": "openai",
          "limit": 5,
          "window_size": 60
        }
      ]
    }
  }'
decK (YAML)

plugins:
  - name: ai-rate-limiting-advanced
    service: llm_service
    config:
      llm_providers:
      - name: openai
        limit: 5
        window_size: 60
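The effect of limit and window_size can be pictured as a per-window token budget: requests are counted by the tokens they consume, and once the budget for the current window is spent, further traffic is rejected until the window rolls over. The following is a simplified fixed-window model of that idea, not Kong’s actual algorithm (which also supports sliding windows):

```python
import time

class TokenWindowLimiter:
    """Fixed-window budget counted in LLM tokens rather than requests."""

    def __init__(self, limit, window_size):
        self.limit = limit              # tokens allowed per window
        self.window_size = window_size  # window length in seconds
        self.window_start = 0.0
        self.used = 0

    def allow(self, tokens, now=None):
        """Return True if `tokens` fit within the current window's budget."""
        now = time.monotonic() if now is None else now
        if now - self.window_start >= self.window_size:
            self.window_start, self.used = now, 0  # window rolled over: reset
        if self.used + tokens > self.limit:
            return False  # budget exhausted: reject the request
        self.used += tokens
        return True

limiter = TokenWindowLimiter(limit=5, window_size=60)
print(limiter.allow(3, now=0))   # True  (3 of 5 tokens used)
print(limiter.allow(3, now=1))   # False (6 would exceed the 5-token budget)
print(limiter.allow(3, now=61))  # True  (a new window has started)
```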

Every other Kong Gateway plugin can also be used alongside the AI plugins for advanced access control, authentication and authorization, security, observability, and more.
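For instance, requiring an API key from AI consumers works the same way as for any other service: add the standard key-auth plugin next to the AI plugins in your decK state file. A sketch (the consumer and credential entities are omitted):

```yaml
plugins:
  - name: key-auth
    service: llm_service
    config:
      key_names:
        - apikey
```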
