Health Checks
Set up active and passive health checking using the Kong Ingress Controller. This configuration allows Kong to automatically short-circuit requests to specific Pods that are misbehaving in your Kubernetes cluster.
When you set up a passive health check for a service that runs in a cluster and the Pod that runs the service reports an error, Kong returns a 503, indicating that the service is unavailable, and does not proxy any more requests to it. There are a few options that you can consider:
- Delete the current Pod; Kong then proxies requests to the new Pod that comes in its place.
- Scale the deployment; Kong then proxies requests to the new Pods and leaves the short-circuited Pod out of the loop.
- Manually change the Pod health status in Kong using Kong’s Admin API (a sketch follows below).
These options highlight the fact that once a circuit is opened because of errors, there is no way for Kong to close the circuit again. Manual intervention is necessary so that a Pod can handle requests again. Alternatively, you can use active health checks, where each instance of Kong actively probes Pods to check if they are healthy.
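For the third option, Kong’s Admin API exposes endpoints to change a target’s health state by hand. A minimal sketch, assuming the Admin API is reachable on localhost:8001; the upstream and target names are placeholders that you would look up first:
# List upstreams managed by Kong, then the targets of the one backing your Service.
curl -s http://localhost:8001/upstreams
curl -s http://localhost:8001/upstreams/<upstream-name>/targets
# Mark a specific target as healthy again so that Kong resumes proxying to it.
curl -i -X PUT http://localhost:8001/upstreams/<upstream-name>/targets/<target-ip:port>/healthy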
Before you begin, ensure that you have installed Kong Ingress Controller with Gateway API support in your Kubernetes cluster and are able to connect to Kong.
Prerequisites
Install the Gateway APIs
- Install the Gateway API CRDs before installing Kong Ingress Controller.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml
- Create a Gateway and GatewayClass instance to use.
echo "
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: kong
  annotations:
    konghq.com/gatewayclass-unmanaged: 'true'
spec:
  controllerName: konghq.com/kic-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong
spec:
  gatewayClassName: kong
  listeners:
  - name: proxy
    port: 80
    protocol: HTTP
" | kubectl apply -f -
The results should look like this:
gatewayclass.gateway.networking.k8s.io/kong created
gateway.gateway.networking.k8s.io/kong created
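Optionally, confirm that the Gateway resources were accepted before moving on; the exact output, such as the ADDRESS column, depends on your cluster:
kubectl get gatewayclass kong
kubectl get gateway kong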
Install Kong
You can install Kong in your Kubernetes cluster using Helm.
- Add the Kong Helm charts:
helm repo add kong https://charts.konghq.com
helm repo update
- Install Kong Ingress Controller and Kong Gateway with Helm:
helm install kong kong/ingress -n kong --create-namespace
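Optionally, check that the controller and proxy Pods are running and that the proxy Service exists before continuing; Pod names and readiness timing vary by cluster:
kubectl get pods -n kong
kubectl get svc -n kong kong-gateway-proxy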
Test connectivity to Kong
Kubernetes exposes the proxy through a Kubernetes service. Run the following commands to store the load balancer IP address in a variable named PROXY_IP:
- Populate $PROXY_IP for future commands (if your load balancer reports a hostname instead of an IP address, see the note after this list):
export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $PROXY_IP
- Ensure that you can call the proxy IP:
curl -i $PROXY_IP
The results should look like this:
HTTP/1.1 404 Not Found
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 48
X-Kong-Response-Latency: 0
Server: kong/3.0.0

{"message":"no Route matched with those values"}
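Note: some cloud load balancers publish a hostname rather than an IP address, in which case the jsonpath used above returns nothing. A hedged alternative for that case:
export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo $PROXY_IP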
Deploy an upstream HTTP application
Create a Kubernetes Service and Deployment for httpbin in the cluster so that Kong can proxy to it:
kubectl apply -f https://docs.konghq.com/assets/kubernetes-ingress-controller/examples/httpbin-service.yaml
The results should look like this:
service/httpbin created
deployment.apps/httpbin created
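Optionally, wait for the Deployment to become available before adding routing configuration; the timeout value here is an arbitrary choice:
kubectl wait --for=condition=Available deployment/httpbin --timeout=120s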
Add routing configuration
- Expose the service outside the Kubernetes cluster by creating a route for it (a hedged manifest sketch follows after these steps).
- Test these endpoints:
curl -i $PROXY_IP/test/status/200
The results should look like this:
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 1
Via: kong/3.4.2
Observe the headers: the Via: kong header shows that Kong has proxied the request correctly.
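The manifest for the routing step above is not shown on this page. A minimal HTTPRoute sketch that matches the /test path used in the rest of this guide might look like the following; the resource name, strip-path annotation, and Service port are assumptions about the httpbin example:
echo '
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httpbin
  annotations:
    konghq.com/strip-path: "true"  # assumption: strip /test before proxying to httpbin
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /test
    backendRefs:
    - name: httpbin
      kind: Service
      port: 80
' | kubectl apply -f -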
Set up passive health checking
- All health checks are done at the Service level, not the Ingress level. To configure Kong to short-circuit requests to a Pod if it throws 3 consecutive errors, add a KongUpstreamPolicy resource:
echo '
apiVersion: configuration.konghq.com/v1beta1
kind: KongUpstreamPolicy
metadata:
  name: demo-health-checking
spec:
  healthchecks:
    passive:
      healthy:
        successes: 3
      unhealthy:
        httpFailures: 3
' | kubectl apply -f -
The results should look like this:
kongupstreampolicy.configuration.konghq.com/demo-health-checking created
- Associate the KongUpstreamPolicy resource with the httpbin service:
kubectl patch svc httpbin -p '{"metadata":{"annotations":{"konghq.com/upstream-policy":"demo-health-checking"}}}'
The results should look like this:
service/httpbin patched
- Test the Ingress rule by sending two requests that simulate a failure from the upstream, followed by a request that returns 200. Requests to /status/500 simulate an upstream failure. Send two requests with /status/500:
curl -i $PROXY_IP/test/status/500
curl -i $PROXY_IP/test/status/500
The results should look like this:
HTTP/1.1 500 INTERNAL SERVER ERROR
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 0
Via: kong/3.4.2
Send the third request with /status/200:
curl -i $PROXY_IP/test/status/200
The results should look like this:
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 0
Via: kong/3.4.2
Kong has not short-circuited because there were only two failures.
- Send three requests to open the circuit, and then send a normal request.
Send three requests with /status/500:
curl -i $PROXY_IP/test/status/500
curl -i $PROXY_IP/test/status/500
curl -i $PROXY_IP/test/status/500
The results should look like this:
HTTP/1.1 500 INTERNAL SERVER ERROR
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 0
Via: kong/3.4.2
Send the fourth request with /status/200:
curl -i $PROXY_IP/test/status/200
The results should look like this:
HTTP/1.1 503 Service Temporarily Unavailable
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 62
X-Kong-Response-Latency: 0
Server: kong/3.4.2

{
  "message":"failure to get a peer from the ring-balancer"
}
Kong returns a 503, indicating that the service is unavailable. Because the only httpbin Pod running in the cluster is throwing errors, Kong does not proxy any more requests. To get around this, you can use active health checks, where each instance of Kong actively probes Pods to check if they are healthy.
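If you prefer not to enable active checks, one of the recovery options listed at the start of this guide is to replace the failing Pod. A sketch using the Deployment created earlier:
# Recreate the httpbin Pod; Kong proxies to the replacement once it is ready.
kubectl rollout restart deployment/httpbin
kubectl rollout status deployment/httpbin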
Set up active health checking
- Update the KongUpstreamPolicy resource to use active health checks:
echo '
apiVersion: configuration.konghq.com/v1beta1
kind: KongUpstreamPolicy
metadata:
  name: demo-health-checking
spec:
  healthchecks:
    active:
      healthy:
        interval: 5
        successes: 3
      httpPath: /status/200
      type: http
      unhealthy:
        httpFailures: 1
        interval: 5
    passive:
      healthy:
        successes: 3
      unhealthy:
        httpFailures: 3
' | kubectl apply -f -
The results should look like this:
kongupstreampolicy.configuration.konghq.com/demo-health-checking configured
This configures Kong to actively probe /status/200 every 5 seconds. If a Pod is unhealthy from Kong’s perspective, 3 successful probes change the status of the Pod back to healthy and Kong again starts to forward requests to that Pod. Wait 15 seconds for the Pod to be marked as healthy before continuing. (A sketch for inspecting the health status that Kong tracks follows after these steps.)
- Test the Ingress rule.
curl -i $PROXY_IP/test/status/200
The results should look like this:
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 1
Via: kong/3.4.2
- Trip the circuit again by sending three requests that return status 500 from httpbin.
curl -i $PROXY_IP/test/status/500
curl -i $PROXY_IP/test/status/500
curl -i $PROXY_IP/test/status/500
After these requests, proxying fails for about 15 seconds, which is how long the active health checks take to re-classify the httpbin Pod as healthy again. Send a request during this window:
curl -i $PROXY_IP/test/status/200
The results should look like this:
HTTP/1.1 503 Service Temporarily Unavailable
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 62
X-Kong-Response-Latency: 0
Server: kong/3.4.2

{
  "message":"failure to get a peer from the ring-balancer"
}
- Send the request again after 15 seconds or so:
curl -i $PROXY_IP/test/status/200
The results should look like this:
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 1
Via: kong/3.4.2
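As mentioned above, if you want to observe the health state that Kong tracks for each target while you experiment, you can query the Admin API; the upstream name below is a placeholder, and this assumes the Admin API is reachable on localhost:8001:
curl -s http://localhost:8001/upstreams/<upstream-name>/health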
Read more about health checks and circuit breakers in Kong’s documentation.