Rate limiting with Kong Ingress Controller

TL;DR

Create a rate-limiting KongPlugin instance and annotate your Service with the konghq.com/plugins annotation.

Prerequisites

If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.

  1. The following Konnect items are required to complete this tutorial:
    • Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
  2. Set the personal access token as an environment variable:

    export KONNECT_TOKEN='YOUR KONNECT TOKEN'
    

Create a rate limiting plugin

To add rate limiting to the echo Service, create a new rate-limiting KongPlugin:

echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-min
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
plugin: rate-limiting
config:
  minute: 5
  policy: local
" | kubectl apply -f -

Next, associate the KongPlugin with the echo Service by adding the konghq.com/plugins annotation:

kubectl annotate -n kong service echo konghq.com/plugins=rate-limit-5-min
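The same association can be made declaratively instead of with kubectl annotate. A sketch of the relevant Service metadata, assuming the echo Service used in this guide (only the annotation shown here is new; the rest of the Service spec is unchanged):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: kong
  annotations:
    # Comma-separate multiple KongPlugin names to attach several plugins.
    konghq.com/plugins: rate-limit-5-min
```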

Validate your configuration

Send repeated requests and watch the rate limit headers count down; after the fifth request within the minute, further requests are blocked:

for _ in {1..6}; do
  curl -i $PROXY_IP/echo
  echo
done


The RateLimit-Remaining header indicates how many requests remain before the rate limit is enforced. The first five responses return HTTP/1.1 200 OK, which indicates that the request is allowed. The final request returns HTTP/1.1 429 Too Many Requests, which indicates that the request was blocked.

If you receive an HTTP 429 from the first request, wait 60 seconds for the rate limit timer to reset.


Scale to multiple pods

The policy: local setting in the plugin configuration tracks request counters in each Pod’s local memory separately. Counters are not synchronized across Pods, so clients can send requests past the limit without being throttled if they route through different Pods.
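The effect of per-Pod counters can be sketched with a toy simulation in plain shell (illustrative only, not Kong code): three independent counters with a limit of 5 each let a round-robin client far exceed the intended limit.

```shell
# Toy model: limit of 5 per window, 3 "pods", round-robin load balancing.
# Each pod keeps its own counter, as with policy: local.
limit=5
pod0=0; pod1=0; pod2=0
allowed=0

i=0
while [ "$i" -lt 18 ]; do          # client sends 18 requests in one window
  case $((i % 3)) in               # round-robin across the three pods
    0) pod0=$((pod0 + 1)); count=$pod0 ;;
    1) pod1=$((pod1 + 1)); count=$pod1 ;;
    2) pod2=$((pod2 + 1)); count=$pod2 ;;
  esac
  if [ "$count" -le "$limit" ]; then
    allowed=$((allowed + 1))       # each pod lets its first 5 through
  fi
  i=$((i + 1))
done

echo "allowed=$allowed"            # 15 requests pass, 3x the intended limit
```

Each pod sees only 6 of the 18 requests and allows its first 5, so 15 requests succeed against a nominal limit of 5.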

To test this, scale your Deployment to three replicas:

kubectl scale --replicas 3 -n kong deployment kong-gateway

It may take up to 30 seconds for the new replicas to come online. Run kubectl get pods -n kong and check the Ready column to validate that the replicas are online.

Sending requests to this Service does not reliably decrement the remaining counter:

for _ in {1..10}; do
  curl -sv $PROXY_IP/echo 2>&1 | grep -E "(< RateLimit-Remaining)"
  echo
done


The requests are distributed across multiple Pods, each with its own in-memory rate limiting counter:


< RateLimit-Remaining: 4

< RateLimit-Remaining: 4

< RateLimit-Remaining: 3

< RateLimit-Remaining: 4

< RateLimit-Remaining: 3

< RateLimit-Remaining: 2

< RateLimit-Remaining: 3

< RateLimit-Remaining: 2

< RateLimit-Remaining: 1

< RateLimit-Remaining: 1

A load balancer that consistently routes each client to the same Pod can alleviate this somewhat, but changes to the number of replicas can still disrupt accurate accounting. To enforce the limit consistently, the plugin needs a shared set of counters across all Pods. The redis policy provides this when a Redis instance is available.

Deploy Redis to your Kubernetes cluster

Redis provides an external database for Kong Gateway components to store shared data, such as rate limiting counters. There are several options to install it.

Bitnami provides a Helm chart for Redis with turnkey options for authentication.

  1. Create a password Secret and replace PASSWORD with a password of your choice.

    kubectl create -n kong secret generic redis-password-secret --from-literal=redis-password=PASSWORD
    
  2. Install Redis:

     helm install -n kong redis oci://registry-1.docker.io/bitnamicharts/redis \
       --set auth.existingSecret=redis-password-secret \
       --set architecture=standalone
    

    Helm prints notes describing the new installation.

    If Redis is not accessible, Kong Gateway allows incoming requests through without rate limiting them. Run kubectl get pods -n kong redis-master-0 and check the Ready column to ensure that Redis is ready before continuing.

  3. Update your plugin configuration with the redis policy, Service, and credentials. Replace PASSWORD with the password that you set for Redis.

     kubectl patch -n kong kongplugin rate-limit-5-min --type json --patch '[
       {
         "op":"replace",
         "path":"/config/policy",
         "value":"redis"
       },
       {
         "op":"add",
         "path":"/config/redis_host",
         "value":"redis-master"
       },
       {
         "op":"add",
         "path":"/config/redis_password",
         "value":"PASSWORD"
       }
     ]'
    

    If redis_username is not set, the plugin authenticates as the default Redis user.
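After the patch, the KongPlugin ends up roughly like the following (a sketch assembled from the original manifest and the patch operations; PASSWORD is whatever you stored in the Secret):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-min
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
plugin: rate-limiting
config:
  minute: 5
  policy: redis            # counters now live in Redis, shared by all Pods
  redis_host: redis-master
  redis_password: PASSWORD
```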

Test rate limiting in a multi-node deployment

Send the following request to test the rate limiting functionality in the multi-Pod deployment:

for _ in {1..6}; do
  curl -sv $PROXY_IP/echo 2>&1 | grep -E "(< RateLimit-Remaining|< HTTP)"
  echo
done


The counters decrement sequentially regardless of the Kong Gateway replica count.


< HTTP/1.1 200 OK
< RateLimit-Remaining: 4

< HTTP/1.1 200 OK
< RateLimit-Remaining: 3

< HTTP/1.1 200 OK
< RateLimit-Remaining: 2

< HTTP/1.1 200 OK
< RateLimit-Remaining: 1

< HTTP/1.1 200 OK
< RateLimit-Remaining: 0

< HTTP/1.1 429 Too Many Requests
< RateLimit-Remaining: 0

Cleanup

kubectl delete -n kong -f https://developer.konghq.com/manifests/kic/echo-service.yaml
