Kong Gateway Operator 1.4.x › Guides › Autoscaling Workloads › Horizontally autoscale workloads using Datadog
Horizontally autoscale workloads using Datadog
Available with a Kong Gateway Enterprise subscription.

Kong Gateway Operator can be integrated with Datadog so that Kong Gateway latency metrics can drive horizontal autoscaling of your workloads.

Install Datadog in your Kubernetes cluster

Datadog API and application keys

To install the Datadog agents in your cluster, you need a Datadog API key and an application key. Refer to the Datadog documentation on API and application keys to obtain them.
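Once you have both keys, export them as environment variables for the install command later in this guide (the variable names `DD_APIKEY` and `DD_APPKEY` match that command; the placeholder values below are, of course, assumptions to be replaced):

```shell
# Substitute the API and application keys obtained from the Datadog UI.
# These variable names match the helm install command used later in this guide.
export DD_APIKEY="<your-datadog-api-key>"
export DD_APPKEY="<your-datadog-application-key>"
```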

Installing

To install Datadog in your cluster, follow the official Datadog Helm installation guide or use the following values.yaml:

datadog:
  kubelet:
    tlsVerify: false

clusterAgent:
  enabled: true
  # Enable the metricsProvider to be able to scale based on metrics in Datadog
  metricsProvider:
    # Set this to true to enable Metrics Provider
    enabled: true
    # Enable usage of DatadogMetric CRD to autoscale on arbitrary Datadog queries
    useDatadogMetrics: true

  prometheusScrape:
    enabled: true
    serviceEndpoints: true

agents:
  containers:
    agent:
      env:
      - name: DD_HOSTNAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName

Then use this values.yaml to install Datadog’s Helm chart:

helm repo add datadog https://helm.datadoghq.com
helm repo update
helm install -n datadog --create-namespace datadog -f values.yaml --set datadog.apiKey=${DD_APIKEY} --set datadog.appKey=${DD_APPKEY} datadog/datadog

Send traffic

To trigger autoscaling, run the following command in a new terminal window. It makes the underlying deployment sleep for 100ms on each request, raising the average response time to roughly that value.

while curl -k "http://$(kubectl get gateway kong -o custom-columns='name:.status.addresses[0].value' --no-headers -n default)/echo/shell?cmd=sleep%200.1" ; do sleep 1; done

Keep this running while you work through the next steps.

Annotate Kong Gateway Operator with Datadog checks config

Note: Kong Gateway Operator uses kube-rbac-proxy to secure its endpoints behind an RBAC proxy. This is why we scrape kube-rbac-proxy and not the manager container.

Add the following annotation on Kong Gateway Operator’s Pod to tell Datadog how to scrape Kong Gateway Operator’s metrics:

ad.datadoghq.com/kube-rbac-proxy.checks: |
  {
    "openmetrics": {
      "instances": [
        {
          "bearer_token_auth": true,
          "bearer_token_path": "/var/run/secrets/kubernetes.io/serviceaccount/token",
          "tls_verify": false,
          "tls_ignore_warning": true,
          "prometheus_url": "https://%%host%%:8443/metrics",
          "namespace": "autoscaling",
          "metrics": [
              "kong_upstream_latency_ms_bucket",
              "kong_upstream_latency_ms_sum",
              "kong_upstream_latency_ms_count",
            ],
          "send_histograms_buckets": true,
          "send_distribution_buckets": true
        }
      ]
    }
  }
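Annotations set directly on a Pod are lost when the Pod is recreated, so in practice you would typically set the annotation on the operator Deployment’s Pod template instead. A hypothetical excerpt (the Deployment name and namespace depend on how you installed the operator and are assumptions here):

```yaml
# Hypothetical excerpt of the operator Deployment; the name and namespace
# are assumptions - adjust them to match your installation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kgo-gateway-operator-controller-manager
  namespace: kong-system
spec:
  template:
    metadata:
      annotations:
        ad.datadoghq.com/kube-rbac-proxy.checks: |
          {
            "openmetrics": {
              "instances": [
                {
                  "bearer_token_auth": true,
                  "bearer_token_path": "/var/run/secrets/kubernetes.io/serviceaccount/token",
                  "tls_verify": false,
                  "tls_ignore_warning": true,
                  "prometheus_url": "https://%%host%%:8443/metrics",
                  "namespace": "autoscaling",
                  "metrics": [
                    "kong_upstream_latency_ms_bucket",
                    "kong_upstream_latency_ms_sum",
                    "kong_upstream_latency_ms_count"
                  ],
                  "send_histograms_buckets": true,
                  "send_distribution_buckets": true
                }
              ]
            }
          }
```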

After applying the above, you should see the avg:autoscaling.kong_upstream_latency_ms{service:echo} metric in your Datadog Metrics Explorer.

Expose Datadog metrics to Kubernetes

To use an external metric in a HorizontalPodAutoscaler, we need to configure the Datadog agent to expose it.

There are several ways to achieve this, but we’ll take the Kubernetes-native route and use the DatadogMetric CRD:

echo '
apiVersion: datadoghq.com/v1alpha1
kind: DatadogMetric
metadata:
  name: echo-kong-upstream-latency-ms-avg
  namespace: default
spec:
  query: avg:autoscaling.kong_upstream_latency_ms{service:echo} ' | kubectl apply -f -

You can check the status of DatadogMetric with:

kubectl get -n default datadogmetric echo-kong-upstream-latency-ms-avg -w

Which should look like this:

NAME                                ACTIVE   VALID   VALUE               REFERENCES         UPDATE TIME
echo-kong-upstream-latency-ms-avg   True     True    104.46194839477539                     38s

You should be able to get the metric via Kubernetes External Metrics API within 30 seconds:

kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/datadogmetric@default:echo-kong-upstream-latency-ms-avg" | jq
{
  "kind": "ExternalMetricValueList",
  "apiVersion": "external.metrics.k8s.io/v1beta1",
  "metadata": {},
  "items": [
    {
      "metricName": "datadogmetric@default:echo-kong-upstream-latency-ms-avg",
      "metricLabels": null,
      "timestamp": "2024-03-08T18:03:02Z",
      "value": "104233138021n"
    }
  ]
}

Note: the n suffix is Kubernetes quantity notation for nano units, used to express fractional values as integers. Since the value here represents latency in milliseconds, 104233138021n is approximately 104.23ms.
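Converting such a quantity back to its original unit is a one-liner: strip the n suffix and divide by 10^9. A small sketch:

```shell
# Strip the 'n' suffix and divide by 1e9 to recover the value in ms.
quantity="104233138021n"
awk -v q="${quantity%n}" 'BEGIN { printf "%.2f\n", q / 1e9 }'
# prints 104.23
```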

Use DatadogMetric in HorizontalPodAutoscaler

With the metric available through the Kubernetes External Metrics API, we can use it in a HorizontalPodAutoscaler.

The echo-kong-upstream-latency-ms-avg DatadogMetric from the default namespace can now be used by a HorizontalPodAutoscaler to autoscale our workload: specifically, the echo Deployment.

The following manifest scales the underlying echo Deployment between 1 and 10 replicas, trying to keep the average upstream latency over the last 30 seconds at 40ms.

echo '
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: echo
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: echo
  minReplicas: 1
  maxReplicas: 10
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 1
      policies:
      - type: Percent
        value: 100
        periodSeconds: 10
    scaleUp:
      stabilizationWindowSeconds: 1
      policies:
      - type: Percent
        value: 100
        periodSeconds: 2
      - type: Pods
        value: 4
        periodSeconds: 2
      selectPolicy: Max

  metrics:
  - type: External
    external:
      metric:
        name: datadogmetric@default:echo-kong-upstream-latency-ms-avg
      target:
        type: Value
        value: 40 ' | kubectl apply -f -

When everything is configured correctly, the DatadogMetric’s status is updated with a reference to the HorizontalPodAutoscaler:

Get the DatadogMetric using kubectl:

kubectl get -n default datadogmetric echo-kong-upstream-latency-ms-avg -w

You will see the HPA reference in the output:

NAME                                ACTIVE   VALID   VALUE               REFERENCES         UPDATE TIME
echo-kong-upstream-latency-ms-avg   True     True    104.46194839477539  hpa:default/echo  38s

If everything went well, we should see SuccessfulRescale events:

12m          Normal   SuccessfulRescale   horizontalpodautoscaler/echo   New size: 2; reason: Service metric kong_upstream_latency_ms_30s_average above target
12m          Normal   SuccessfulRescale   horizontalpodautoscaler/echo   New size: 4; reason: Service metric kong_upstream_latency_ms_30s_average above target
12m          Normal   SuccessfulRescale   horizontalpodautoscaler/echo   New size: 8; reason: Service metric kong_upstream_latency_ms_30s_average above target
12m          Normal   SuccessfulRescale   horizontalpodautoscaler/echo   New size: 10; reason: Service metric kong_upstream_latency_ms_30s_average above target

# Then when latency drops
4s          Normal   SuccessfulRescale   horizontalpodautoscaler/echo   New size: 1; reason: All metrics below target
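The events above can be listed with kubectl; the field selector below is an assumption that narrows the output to events involving the echo HorizontalPodAutoscaler:

```shell
# List scaling events for the echo HorizontalPodAutoscaler in the
# default namespace, oldest first.
kubectl get events -n default \
  --field-selector involvedObject.kind=HorizontalPodAutoscaler,involvedObject.name=echo \
  --sort-by=.lastTimestamp
```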