Setting up Active and Passive health checks

Learn to set up active and passive health checking using the Kong Ingress Controller. This configuration allows Kong to automatically short-circuit requests to specific Pods that are misbehaving in your Kubernetes cluster.

Prerequisites: Install Kong Ingress Controller in your Kubernetes cluster and connect to Kong.

Prerequisites

Install Kong

You can install Kong in your Kubernetes cluster using Helm.

  1. Add the Kong Helm charts:

     helm repo add kong https://charts.konghq.com
     helm repo update
    
  2. Install Kong Ingress Controller and Kong Gateway with Helm:

     helm install kong kong/ingress -n kong --create-namespace 
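
     To confirm the installation finished, you can wait for the Pods in the kong namespace to become ready. This is a quick check, assuming the release name and namespace used in the command above:

     kubectl get pods -n kong
     kubectl wait --for=condition=Ready pod --all -n kong --timeout=300s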
    

Test connectivity to Kong

The Kong proxy is exposed through a Kubernetes Service. Run the following commands to store the load balancer IP address in a variable named PROXY_IP:

  1. Populate $PROXY_IP for future commands:

     export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
     echo $PROXY_IP
    
  2. Ensure that you can call the proxy IP:

     curl -i $PROXY_IP
    

    The results should look like this:

     HTTP/1.1 404 Not Found
     Content-Type: application/json; charset=utf-8
     Connection: keep-alive
     Content-Length: 48
     X-Kong-Response-Latency: 0
     Server: kong/3.0.0
      
     {"message":"no Route matched with those values"}
    

Create a Kubernetes service

Set up an httpbin service in the cluster so that Kong can proxy requests to it.

$ kubectl apply -f https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/v3.4.4/deploy/manifests/httpbin.yaml

The results should look like this:

service/httpbin created
deployment.apps/httpbin created
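
Before setting up routing, you can optionally wait for the Deployment to finish rolling out. This assumes the manifest creates the httpbin Deployment in the default namespace, as the output above indicates:

kubectl rollout status deployment/httpbin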

Set up Ingress rules

  1. Expose the service outside the Kubernetes cluster by defining Ingress rules.

     echo '
     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
       name: demo
       annotations:
         konghq.com/strip-path: "true"
     spec:
       ingressClassName: kong
       rules:
       - http:
           paths:
           - path: /test
             pathType: Prefix
             backend:
               service:
                 name: httpbin
                 port:
                   number: 80
     ' | kubectl apply -f -
    

    The results should look like this:

     ingress.networking.k8s.io/demo created
    
  2. Test these endpoints.

     curl -i $PROXY_IP/test/status/200
    

    The results should look like this:

     HTTP/1.1 200 OK
     Content-Type: text/html; charset=utf-8
     Content-Length: 0
     Connection: keep-alive
     Server: gunicorn/19.9.0
     Access-Control-Allow-Origin: *
     Access-Control-Allow-Credentials: true
     X-Kong-Upstream-Latency: 982
     X-Kong-Proxy-Latency: 2
     Via: kong/3.3.1
    

    Observe the headers; the Via and X-Kong-* headers show that Kong has proxied the request correctly.

Set up passive health checking

  1. All health checks are configured at the Service level, not the Ingress level. To configure Kong to short-circuit requests to a Pod after it throws 3 consecutive errors, add a KongIngress resource.

    echo "apiVersion: configuration.konghq.com/v1
    kind: KongIngress
    metadata:
        name: demo-health-checking
    upstream:
      healthchecks:
        passive:
          healthy:
            successes: 3
          unhealthy:
            http_failures: 3" | kubectl apply -f -
    

    The results should look like this:

    kongingress.configuration.konghq.com/demo-health-checking created
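
    You can optionally confirm the resource exists and inspect its configuration:

    kubectl get kongingress demo-health-checking -o yaml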
    
  2. Associate the KongIngress resource with the httpbin service.

     $ kubectl patch svc httpbin -p '{"metadata":{"annotations":{"konghq.com/override":"demo-health-checking"}}}'
    

    The results should look like this:

     service/httpbin patched
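
     To confirm the annotation was applied, you can print the Service annotations:

     kubectl get svc httpbin -o jsonpath='{.metadata.annotations}'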
    
  3. Test the Ingress rule by sending two requests with /status/500, which simulate failures from the upstream, followed by a request with /status/200. First, send the two /status/500 requests:
     $ curl -i $PROXY_IP/test/status/500
     $ curl -i $PROXY_IP/test/status/500
    

    The results should look like this:

     HTTP/1.1 500 INTERNAL SERVER ERROR
     Content-Type: text/html; charset=utf-8
     Content-Length: 0
     Connection: keep-alive
     Server: gunicorn/19.9.0
     Access-Control-Allow-Origin: *
     Access-Control-Allow-Credentials: true
     X-Kong-Upstream-Latency: 2
     X-Kong-Proxy-Latency: 0
     Via: kong/3.1.1
    

     Send the third request with /status/200:

     $ curl -i $PROXY_IP/test/status/200
    

    The results should look like this:

     HTTP/1.1 200 OK
     Content-Type: text/html; charset=utf-8
     Content-Length: 0
     Connection: keep-alive
     Server: gunicorn/19.9.0
     Access-Control-Allow-Origin: *
     Access-Control-Allow-Credentials: true
     X-Kong-Upstream-Latency: 2
     X-Kong-Proxy-Latency: 0
     Via: kong/3.1.1
    

    Kong has not short-circuited because there were only two failures.

  4. Send three /status/500 requests to open the circuit, and then send a normal request. First, send the three /status/500 requests:
     $ curl -i $PROXY_IP/test/status/500
     $ curl -i $PROXY_IP/test/status/500
     $ curl -i $PROXY_IP/test/status/500
    

    The results should look like this:

     HTTP/1.1 500 INTERNAL SERVER ERROR
     Content-Type: text/html; charset=utf-8
     Content-Length: 0
     Connection: keep-alive
     Server: gunicorn/19.9.0
     Access-Control-Allow-Origin: *
     Access-Control-Allow-Credentials: true
     X-Kong-Upstream-Latency: 2
     X-Kong-Proxy-Latency: 0
     Via: kong/3.1.1
    

     Send the fourth request with /status/200:

     $ curl -i $PROXY_IP/test/status/200
    

    The results should look like this:

     HTTP/1.1 503 Service Temporarily Unavailable
     Content-Type: application/json; charset=utf-8
     Connection: keep-alive
     Content-Length: 62
     X-Kong-Response-Latency: 1
     Server: kong/3.3.1
        
     {
       "message":"failure to get a peer from the ring-balancer"
     }
    

    Kong returns a 503, indicating that the service is unavailable. Because only one Pod of the httpbin service is running in the cluster and it is throwing errors, Kong does not proxy any more requests to it.

There are a few ways to recover (the first two are sketched after this list):

  • Delete the current httpbin Pod; Kong then proxies requests to the new Pod that comes up in its place.
  • Scale the httpbin Deployment; Kong then proxies requests to the new Pods and leaves the short-circuited Pod out of the loop.
  • Manually change the Pod health status in Kong using Kong’s Admin API.
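
The first two options map to standard kubectl commands. This is a sketch that assumes the httpbin Pods carry the app=httpbin label from the sample manifest:

    # Option 1: delete the misbehaving Pod; the Deployment replaces it with a fresh one
    kubectl delete pod -l app=httpbin

    # Option 2: scale the Deployment so additional Pods can take the traffic
    kubectl scale deployment httpbin --replicas=2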

These options highlight that once a circuit is opened because of errors, Kong has no way to close the circuit again on its own.

Some services need this behavior: after a Pod starts throwing errors, manual intervention is required before that Pod can handle requests again. To avoid manual intervention, you can use active health checks, where each Kong instance actively probes Pods to check whether they are healthy.

Set up active health checking

  1. Update the KongIngress resource to use active health checks.
    echo "apiVersion: configuration.konghq.com/v1
    kind: KongIngress
    metadata:
        name: demo-health-checking
    upstream:
      healthchecks:
        active:
          healthy:
            interval: 5
            successes: 3
          http_path: /status/200
          type: http
          unhealthy:
            http_failures: 1
            interval: 5
        passive:
          healthy:
            successes: 3
          unhealthy:
            http_failures: 3" | kubectl apply -f -
    

    The results should look like this:

    kongingress.configuration.konghq.com/demo-health-checking configured
    

     This configures Kong to actively probe /status/200 every 5 seconds. If a Pod is unhealthy from Kong’s perspective, three successful probes change the status of the Pod to healthy and Kong again starts to forward requests to that Pod. Wait 15 seconds for the Pod to be marked as healthy before continuing.
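
     If you want to inspect the health status Kong has recorded for each target, you can query the Kong Admin API. This is an optional sketch: it assumes the Admin API is reachable on its default HTTP port 8001 (for example via kubectl port-forward to the Kong Gateway Pod), and that you list the upstreams first, because the Kong Ingress Controller generates the upstream name from the Service name, namespace, and port:

     # List upstreams to find the one generated for the httpbin Service
     curl -s http://localhost:8001/upstreams

     # Show per-target health for that upstream (substitute the name from the listing)
     curl -s http://localhost:8001/upstreams/<upstream-name>/health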

  2. Test the Ingress rule.
     $ curl -i $PROXY_IP/test/status/200
    

    The results should look like this:

     HTTP/1.1 200 OK
     Content-Type: text/html; charset=utf-8
     Content-Length: 0
     Connection: keep-alive
     Server: gunicorn/19.9.0
     Access-Control-Allow-Origin: *
     Access-Control-Allow-Credentials: true
     X-Kong-Upstream-Latency: 2
     X-Kong-Proxy-Latency: 0
     Via: kong/3.1.1
    
  3. Trip the circuit again by sending three requests that return status 500 from httpbin.
    $ curl -i $PROXY_IP/test/status/500
    $ curl -i $PROXY_IP/test/status/500
    $ curl -i $PROXY_IP/test/status/500
    

    After you send these requests, requests to the service fail for about 15 seconds, which is how long the active health check takes to re-classify the httpbin Pod as healthy again.

    $ curl -i $PROXY_IP/test/status/200
    

    The results should look like this:

    HTTP/1.1 503 Service Temporarily Unavailable
    Content-Type: application/json; charset=utf-8
    Connection: keep-alive
    Content-Length: 62
    X-Kong-Response-Latency: 1
    Server: kong/3.3.1
        
    {
      "message":"failure to get a peer from the ring-balancer"
    }
    
  4. Send the request again after about 15 seconds.

    $ curl -i $PROXY_IP/test/status/200
    

    The results should look like this:

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=utf-8
    Content-Length: 0
    Connection: keep-alive
    Server: gunicorn/19.9.0
    Access-Control-Allow-Origin: *
    Access-Control-Allow-Credentials: true
    X-Kong-Upstream-Latency: 2
    X-Kong-Proxy-Latency: 0
    Via: kong/3.1.1
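
    Rather than waiting a fixed 15 seconds, you can poll until the active health check marks the Pod healthy again; a minimal sketch:

    until curl -s -o /dev/null -w '%{http_code}' $PROXY_IP/test/status/200 | grep -q 200; do
      sleep 1
    done
    echo "httpbin is healthy again"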
    

Read more about health checks and circuit breakers in Kong’s documentation.
