Horizontally autoscale workloads
Kong Gateway Operator can scrape Kong Gateway metrics and enrich them with Kubernetes metadata so that users can autoscale their workloads.
Before you begin, ensure that you have installed the Kong Gateway Operator in your Kubernetes cluster. This guide requires an enterprise license.
Prerequisites
Install Kong Gateway Operator
Update the Helm repository:
helm repo add kong https://charts.konghq.com
helm repo update kong
Install Kong Gateway Operator with Helm:
helm upgrade --install kgo kong/gateway-operator -n kong-system --create-namespace --set image.tag=1.3
You can wait for the operator to be ready using kubectl wait:
kubectl -n kong-system wait --for=condition=Available=true --timeout=120s deployment/kgo-gateway-operator-controller-manager
Enterprise License
Note: This is an enterprise feature. To use it, you'll need a license installed in your cluster so that Kong Gateway Operator can consume it.
echo "
apiVersion: configuration.konghq.com/v1alpha1
kind: KongLicense
metadata:
  name: kong-license
rawLicenseString: '$(cat ./license.json)'
" | kubectl apply -f -
Overview
Kong Gateway provides extensive metrics through its Prometheus plugin. However, these metrics are labelled with Kong entities such as Service and Route rather than Kubernetes resources.
Kong Gateway Operator provides DataPlaneMetricsExtension, which scrapes the Kong metrics and enriches them with Kubernetes labels before exposing them on its own /metrics endpoint.
These enriched metrics can be used with the Kubernetes HorizontalPodAutoscaler to autoscale workloads.
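Later, once traffic is flowing through the example below, you can verify that the enriched metrics are being served by port-forwarding to the operator and querying its /metrics endpoint. This is only a quick sketch: port 8080 is the controller-runtime default and the deployment name matches the Helm installation used in this guide, but both may differ in your cluster.

# Forward the operator's metrics port locally (8080 is assumed; adjust if needed).
kubectl -n kong-system port-forward deployment/kgo-gateway-operator-controller-manager 8080:8080 &

# Look for Kong metrics enriched with Kubernetes labels.
curl -s localhost:8080/metrics | grep kong_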
How it works
Attaching a DataPlaneMetricsExtension resource to a ControlPlane will:
- Create a managed Prometheus KongPlugin instance with the configuration defined in MetricsConfig
- Append the managed plugin to the konghq.com/plugins annotation of the Services selected through DataPlaneMetricsExtension's serviceSelector field (a sketch of these managed resources follows this list)
- Scrape Kong Gateway's metrics and enrich them with Kubernetes metadata
- Expose those metrics on Kong Gateway Operator's /metrics endpoint
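To make the first two steps concrete, the objects managed by the operator look roughly like the sketch below. The plugin name is generated by the operator, so the name used here is hypothetical; only the shape of the resources matters.

# Managed KongPlugin created from the MetricsConfig
# (the name is hypothetical; the real one is generated by the operator).
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: kgo-prometheus-example
  namespace: default
plugin: prometheus
config:
  latency_metrics: true
---
# The selected Service has the managed plugin appended to its
# konghq.com/plugins annotation.
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: default
  annotations:
    konghq.com/plugins: kgo-prometheus-example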
Example
This example deploys an echo Service which will have its latency measured and exposed on Kong Gateway Operator's /metrics endpoint. The service allows us to run any shell command, which we'll use to add artificial latency later for testing purposes.
echo '
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: default
spec:
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: http
  selector:
    app: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo
  name: echo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: registry.k8s.io/e2e-test-images/agnhost:2.40
          command:
            - /agnhost
            - netexec
            - --http-port=8080
          ports:
            - containerPort: 8080
              name: http
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP' | kubectl apply -f -
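Before moving on, you can optionally wait for the echo Deployment to become available, mirroring the kubectl wait used for the operator above:

kubectl -n default wait --for=condition=Available=true --timeout=120s deployment/echo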
Next, create a DataPlaneMetricsExtension that points to the echo service, attach it to a GatewayConfiguration resource, and deploy a Gateway with an HTTPRoute so that we can make an HTTP request to the service.
echo '
kind: DataPlaneMetricsExtension
apiVersion: gateway-operator.konghq.com/v1alpha1
metadata:
  name: kong
  namespace: default
spec:
  serviceSelector:
    matchNames:
      - name: echo
  config:
    latency: true
---
kind: GatewayConfiguration
apiVersion: gateway-operator.konghq.com/v1beta1
metadata:
  name: kong
  namespace: default
spec:
  dataPlaneOptions:
    deployment:
      replicas: 1
      podTemplateSpec:
        spec:
          containers:
            - name: proxy
              image: kong/kong-gateway:3.8.1.0
              readinessProbe:
                initialDelaySeconds: 1
                periodSeconds: 1
  controlPlaneOptions:
    deployment:
      podTemplateSpec:
        spec:
          containers:
            - name: controller
              readinessProbe:
                initialDelaySeconds: 1
                periodSeconds: 1
    extensions:
      - kind: DataPlaneMetricsExtension
        group: gateway-operator.konghq.com
        name: kong
---
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: kong
spec:
  controllerName: konghq.com/gateway-operator
  parametersRef:
    group: gateway-operator.konghq.com
    kind: GatewayConfiguration
    name: kong
    namespace: default
---
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: kong
  namespace: default
spec:
  gatewayClassName: kong
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httproute-echo
  namespace: default
  annotations:
    konghq.com/strip-path: "true"
spec:
  parentRefs:
    - name: kong
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /echo
      backendRefs:
        - name: echo
          kind: Service
          port: 80 ' | kubectl apply -f -
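After the Gateway is programmed, you can send traffic through it so that there is latency data to scrape. The commands below are a sketch: they assume the Gateway has been assigned a reachable address in its status, and the /shell endpoint used to add artificial latency is provided by the agnhost netexec image deployed above (its exact query syntax may vary between image versions).

# Grab the Gateway's address (assumes an address has been populated in status).
export PROXY_IP=$(kubectl get gateway kong -n default -o jsonpath='{.status.addresses[0].value}')

# Send a request to the echo service through Kong Gateway.
curl -s "http://$PROXY_IP/echo"

# Ask the agnhost container to sleep before responding, adding artificial upstream latency.
curl -s "http://$PROXY_IP/echo/shell?cmd=sleep%200.2"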
Metrics support for enrichment
- upstream latency (kong_upstream_latency_ms), enabled via the latency configuration option
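These enriched metrics become usable for autoscaling once they are available through the Kubernetes custom metrics API, for example via prometheus-adapter or a similar adapter that scrapes Kong Gateway Operator's /metrics endpoint. Assuming such an adapter exposes the enriched latency for the echo Service under the hypothetical name kong_upstream_latency_ms, a HorizontalPodAutoscaler consuming it could look like the sketch below; the metric name and target value are illustrative only.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: echo
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: echo
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        metric:
          name: kong_upstream_latency_ms   # hypothetical name exposed by your metrics adapter
        describedObject:
          apiVersion: v1
          kind: Service
          name: echo
        target:
          type: Value
          value: "200"   # illustrative target latency in milliseconds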
Custom Metrics providers support
Metrics exposed by Kong Gateway Operator can be integrated with a variety of monitoring systems. You can also follow our guides to integrate Kong Gateway Operator with:
Limitations
Multi backend Kong services
Kong Gateway Operator is not able to provide accurate measurements for multi-backend Kong services, e.g. HTTPRoutes that have more than 1 backendRef:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httproute-testing
spec:
  parentRefs:
    - name: kong
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /httproute-testing
      backendRefs:
        - name: httpbin
          kind: Service
          port: 80
          weight: 75
        - name: nginx
          kind: Service
          port: 8080
          weight: 25