Kong Ingress Controller 3.4.x (latest) LTS
Rate-Limiting

Kong can rate limit traffic without any external dependency: it stores request counters in memory, and each Kong node applies the rate limiting policy independently, without synchronizing counters with other nodes. However, if Redis is available in your cluster, Kong can use it to share rate limit counters across multiple Kong nodes and enforce the limit consistently. This guide shows how to use Redis for rate limiting in a multi-node Kong deployment.
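As a rough illustration of what the `local` policy does, the fixed-window counting behind a limit such as 5 requests per minute can be sketched as follows. This is a simplified model for intuition only, not Kong's implementation:

```python
import time

class FixedWindowLimiter:
    """Simplified fixed-window rate limiter, loosely modeling the
    rate-limiting plugin's 'local' policy (per-node, in-memory)."""

    def __init__(self, limit, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # window start timestamp -> request count

    def allow(self, now=None):
        now = time.time() if now is None else now
        window_start = int(now // self.window) * self.window
        count = self.counters.get(window_start, 0)
        if count >= self.limit:
            return False  # over the limit: would answer HTTP 429
        self.counters[window_start] = count + 1
        return True

limiter = FixedWindowLimiter(limit=5)
print([limiter.allow(now=100) for _ in range(6)])
# first five requests in the window are allowed, the sixth is rejected
```

Because each node holds its own `counters` dictionary, two nodes running this logic independently would each allow the full limit, which is the behavior demonstrated later in this guide.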

You can use the Kong Gateway Enterprise Secrets Management feature along with the example rate-limiting plugin. If you have an existing plugin that you wish to use Secrets Management with, you can skip directly to the Secrets Management section and use it for your plugin instead of the rate-limiting plugin.

Prerequisites: Install Kong Ingress Controller with Gateway API support in your Kubernetes cluster and connect to Kong.

Prerequisites

Install the Gateway APIs

  1. Install the Gateway API CRDs before installing Kong Ingress Controller.

     kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml
    
  2. Create a Gateway and GatewayClass instance to use.

    echo "
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: GatewayClass
    metadata:
      name: kong
      annotations:
        konghq.com/gatewayclass-unmanaged: 'true'
    
    spec:
      controllerName: konghq.com/kic-gateway-controller
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: kong
    spec:
      gatewayClassName: kong
      listeners:
      - name: proxy
        port: 80
        protocol: HTTP
        allowedRoutes:
          namespaces:
             from: All
    " | kubectl apply -f -
    

    The results should look like this:

    gatewayclass.gateway.networking.k8s.io/kong created
    gateway.gateway.networking.k8s.io/kong created
    

Install Kong

You can install Kong in your Kubernetes cluster using Helm.

  1. Add the Kong Helm charts:

     helm repo add kong https://charts.konghq.com
     helm repo update
    
  2. Install Kong Ingress Controller and Kong Gateway with Helm:

     helm install kong kong/ingress -n kong --create-namespace 
    

Test connectivity to Kong

Kubernetes exposes the proxy through a Kubernetes service. Run the following commands to store the load balancer IP address in a variable named PROXY_IP:

  1. Populate $PROXY_IP for future commands:

     export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
     echo $PROXY_IP
    
  2. Ensure that you can call the proxy IP:

     curl -i $PROXY_IP
    

    The results should look like this:

     HTTP/1.1 404 Not Found
     Content-Type: application/json; charset=utf-8
     Connection: keep-alive
     Content-Length: 48
     X-Kong-Response-Latency: 0
     Server: kong/3.0.0
      
     {"message":"no Route matched with those values"}
    

Deploy an echo service

To proxy requests, you need an upstream application to send a request to. Deploying this echo server provides a simple application that returns information about the Pod it’s running in:

kubectl apply -f https://docs.konghq.com/assets/kubernetes-ingress-controller/examples/echo-service.yaml

The results should look like this:

service/echo created
deployment.apps/echo created

Add routing configuration

Create routing configuration to proxy /echo requests to the echo server:

Gateway API:

echo "
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo
  annotations:
    konghq.com/strip-path: 'true'
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - name: echo
      kind: Service
      port: 1027
" | kubectl apply -f -
echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  annotations:
    konghq.com/strip-path: 'true'
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /echo
        pathType: ImplementationSpecific
        backend:
          service:
            name: echo
            port:
              number: 1027
" | kubectl apply -f -

The results should look like this:

Gateway API:

httproute.gateway.networking.k8s.io/echo created

Ingress:

ingress.networking.k8s.io/echo created

Test the routing rule:

curl -i $PROXY_IP/echo

The results should look like this:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 140
Connection: keep-alive
Date: Fri, 21 Apr 2023 12:24:55 GMT
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/3.2.2

Welcome, you are connected to node docker-desktop.
Running on Pod echo-7f87468b8c-tzzv6.
In namespace default.
With IP address 10.1.0.237.
...

If everything is deployed correctly, you should see the above response. This verifies that Kong Gateway can correctly route traffic to an application running inside Kubernetes.

Set up rate limiting

  1. Create an instance of the rate-limiting plugin.

     echo "
     apiVersion: configuration.konghq.com/v1
     kind: KongPlugin
     metadata:
       name: rate-limit
       annotations:
         kubernetes.io/ingress.class: kong
     config:
       minute: 5
       policy: local
     plugin: rate-limiting
     " | kubectl apply -f -
    

    The results should look like this:

     kongplugin.configuration.konghq.com/rate-limit created
    
  2. Associate the plugin with the Service.

     kubectl annotate service echo konghq.com/plugins=rate-limit
    

    The results should look like this:

     service/echo annotated
    
  3. Send requests through this Service to see the rate limiting response headers.

     curl -si $PROXY_IP/echo | grep RateLimit
    

    The results should look like this:

     RateLimit-Limit: 5
     X-RateLimit-Remaining-Minute: 4
     X-RateLimit-Limit-Minute: 5
     RateLimit-Reset: 60
     RateLimit-Remaining: 4
    
  4. Send repeated requests to decrement the remaining limit headers, and block requests after the fifth request.

     for i in `seq 6`; do curl -sv $PROXY_IP/echo 2>&1 | grep "< HTTP"; done
    

    The results should look like this:

     < HTTP/1.1 200 OK
     < HTTP/1.1 200 OK
     < HTTP/1.1 200 OK
     < HTTP/1.1 200 OK
     < HTTP/1.1 200 OK
     < HTTP/1.1 429 Too Many Requests
    

Scale to multiple pods

  1. Scale your Deployment to three replicas, to test with multiple proxy instances.

     kubectl scale --replicas 3 -n kong deployment kong-gateway
    

    The results should look like this:

     deployment.apps/kong-gateway scaled
    
  2. Check that all Pods are READY and in the Running state using the command kubectl get pods -n kong.

  3. Send requests to the Service and observe that the remaining counter does not decrement reliably.

     for i in `seq 10`; do curl -sv $PROXY_IP/echo 2>&1 | grep "X-RateLimit-Remaining-Minute"; done
    

    The results should look like this:

     < X-RateLimit-Remaining-Minute: 4
     < X-RateLimit-Remaining-Minute: 4
     < X-RateLimit-Remaining-Minute: 3
     < X-RateLimit-Remaining-Minute: 4
     < X-RateLimit-Remaining-Minute: 3
     < X-RateLimit-Remaining-Minute: 2
     < X-RateLimit-Remaining-Minute: 3
     < X-RateLimit-Remaining-Minute: 2
     < X-RateLimit-Remaining-Minute: 1
     < X-RateLimit-Remaining-Minute: 1
    

    The policy: local setting in the plugin configuration tracks request counters in each Pod’s local memory separately. Counters are not synchronized across Pods, so clients can send requests past the limit without being throttled if they route through different Pods.

    Using a load balancer that distributes client requests to the same Pod can alleviate this somewhat, but changes to the number of replicas can still disrupt accurate accounting. To consistently enforce the limit, the plugin needs to use a shared set of counters across all Pods. The redis policy can do this when a Redis instance is available.
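The effect of independent per-Pod counters can be sketched with a small simulation. The pod-selection model and numbers here are assumptions for illustration, not Kong internals:

```python
import random

LIMIT = 5  # requests per minute, per counter

def simulate(num_pods, num_requests, seed=42):
    """Simulate one client sending requests through a load balancer.
    With policy 'local', each pod keeps its own counter; a shared
    (Redis-style) counter behaves like num_pods == 1."""
    rng = random.Random(seed)
    counters = [0] * num_pods
    allowed = 0
    for _ in range(num_requests):
        pod = rng.randrange(num_pods)  # load balancer picks a pod at random
        if counters[pod] < LIMIT:
            counters[pod] += 1
            allowed += 1
    return allowed

print(simulate(num_pods=1, num_requests=20))  # shared counter: exactly 5 allowed
print(simulate(num_pods=3, num_requests=60))  # local counters: more than 5 slip through
```

With one counter the client is capped at exactly `LIMIT` requests; with three independent counters the effective limit approaches `3 * LIMIT`, which is why the headers above bounce around instead of decrementing monotonically.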

Deploy Redis to your Kubernetes cluster

Redis provides an external datastore where Kong components can share data, such as rate limiting counters. There are several ways to install it.

This guide uses the Bitnami Helm chart for Redis, which provides turnkey options for authentication.

  1. Create a password Secret and replace PASSWORD with a password of your choice.

    kubectl create -n kong secret generic redis-password-secret --from-literal=redis-password=PASSWORD
    

    The results should look like this:

    secret/redis-password-secret created
    
  2. Install Redis

     helm install -n kong redis oci://registry-1.docker.io/bitnamicharts/redis \
       --set auth.existingSecret=redis-password-secret \
       --set architecture=standalone
    

    Helm displays instructions describing the new installation.

  3. Update your plugin configuration with the redis policy, Service, and credentials. Replace PASSWORD with the password that you set for Redis.

     kubectl patch kongplugin rate-limit --type json --patch '[
       {
         "op":"replace",
         "path":"/config/policy",
         "value":"redis"
       },
       {
         "op":"add",
         "path":"/config/redis_host",
         "value":"redis-master"
       },
       {
         "op":"add",
         "path":"/config/redis_password",
         "value":"PASSWORD"
       }
     ]'
    

    The results should look like this:

     kongplugin.configuration.konghq.com/rate-limit patched
    

    If redis_username is not set, the plugin uses the default redis user.

Test rate limiting in a multi-node Kong deployment

Send requests to the Service and check the rate limiting response headers:

for i in `seq 10`; do curl -sv $PROXY_IP/echo 2>&1 | grep "X-RateLimit-Remaining-Minute"; done

The results should look like this:

< X-RateLimit-Remaining-Minute: 4
< X-RateLimit-Remaining-Minute: 3
< X-RateLimit-Remaining-Minute: 2
< X-RateLimit-Remaining-Minute: 1
< X-RateLimit-Remaining-Minute: 0
< X-RateLimit-Remaining-Minute: 0
< X-RateLimit-Remaining-Minute: 0
< X-RateLimit-Remaining-Minute: 0
< X-RateLimit-Remaining-Minute: 0
< X-RateLimit-Remaining-Minute: 0

The counters decrement sequentially regardless of the Kong Gateway replica count.

(Optional) Use Secrets Management
Available with Kong Gateway Enterprise subscription - Contact Sales

Secrets Management is a Kong Gateway Enterprise feature for storing sensitive plugin configuration separately from the visible plugin configuration. The rate-limiting plugin supports Secrets Management for its redis_username and redis_password fields.

Secrets Management supports several backend systems. This guide uses the environment variable backend, which requires minimal configuration and integrates well with Kubernetes’ standard Secret-sourced environment variables.

Add environment variable from Secret

Update your proxy Deployment with an environment variable sourced from the redis-password-secret Secret.

kubectl patch deploy -n kong kong-gateway --patch '
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "proxy",
            "env": [
              {
                "name": "SECRET_REDIS_PASSWORD",
                "valueFrom": {
                  "secretKeyRef": {
                    "name": "redis-password-secret",
                    "key": "redis-password"
                  }
                }
              }
            ]
          }
        ]
      }
    }
  }
}'

The results should look like this:

deployment.apps/kong-gateway patched

Update the plugin to use a reference

After the vault has an entry, you can use a special {vault://VAULT-TYPE/VAULT-KEY} value in plugin configuration instead of a literal value. Patch the rate-limit KongPlugin to change the redis_password value to a vault reference.

kubectl patch kongplugin rate-limit --type json --patch '[
  {
    "op":"replace",
    "path":"/config/redis_password",
    "value":"{vault://env/secret-redis-password}"
  }
]'
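The environment variable backend derives the variable name from the reference key by uppercasing it and replacing dashes with underscores, which is why the reference `{vault://env/secret-redis-password}` pairs with the `SECRET_REDIS_PASSWORD` variable added earlier. A rough sketch of that lookup (illustrative only, not Kong's implementation):

```python
import os

def resolve_env_vault_reference(ref):
    """Resolve a {vault://env/<key>} reference to the value of the
    corresponding environment variable, mirroring how the env vault
    backend maps names (uppercase, dashes -> underscores)."""
    prefix, suffix = "{vault://env/", "}"
    if not (ref.startswith(prefix) and ref.endswith(suffix)):
        raise ValueError(f"not an env vault reference: {ref}")
    key = ref[len(prefix):-len(suffix)]
    env_name = key.upper().replace("-", "_")
    return os.environ[env_name]

# Stand-in for the Secret-sourced variable set on the proxy Deployment.
os.environ["SECRET_REDIS_PASSWORD"] = "PASSWORD"
print(resolve_env_vault_reference("{vault://env/secret-redis-password}"))  # -> PASSWORD
```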

Check plugin configuration

The updated KongPlugin propagates the reference to the proxy configuration. You can confirm it by checking the admin API.

In one terminal, open a port-forward to the admin API:

kubectl port-forward deploy/kong-gateway -n kong 8444:8444

The results should look like this:

Forwarding from 127.0.0.1:8444 -> 8444

In a separate terminal, query the /plugins endpoint and filter out the rate-limiting plugin:

curl -ks https://localhost:8444/plugins/ | jq '.data[] | select(.name=="rate-limiting") | .config.redis_password'

The results should look like this:

"{vault://env/secret-redis-password}"