Prometheus

About this Plugin

Categories:
  • Analytics & Monitoring

DB-less compatible? Yes (Kong Gateway only)

Bundled with:
  • Kong Enterprise: 2.2.x, 2.1.x, 1.5.x, 1.3-x, 0.36-x
  • Kong Community: 2.2.x, 2.1.x, 2.0.x, 1.5.x, 1.4.x, 1.3.x, 1.2.x, 1.1.x, 1.0.x, 0.14.x

Expose metrics related to Kong and proxied Upstream services in Prometheus exposition format, which can be scraped by a Prometheus Server.

Configuration Reference

You can configure this plugin using the Kong Admin API or through declarative configuration, which involves directly editing Kong's declarative configuration file.

This plugin is compatible with requests with the following protocols:

  • http
  • https
  • tcp
  • tls
  • grpc
  • grpcs

This plugin is compatible with DB-less mode.

In DB-less mode, the Kong Admin API is read-only, so it cannot be used to configure plugins. If using this mode, configure the plugin using declarative configuration.

The database will always be reported as reachable in Prometheus with DB-less.

Enabling the plugin on a Service

Kong Admin API

For example, configure this plugin on a Service by making the following request:

$ curl -X POST http://<admin-hostname>:8001/services/<service>/plugins \
    --data "name=prometheus" 

Kubernetes

First, create a KongPlugin resource:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: <prometheus-example>
config:
  <optional_parameter>: <value>
plugin: prometheus

Next, apply the KongPlugin resource to a Service by annotating the Service as follows:

apiVersion: v1
kind: Service
metadata:
  name: <service>
  labels:
    app: <service>
  annotations:
    konghq.com/plugins: <prometheus-example>
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: <service>
  selector:
    app: <service>
Note: The KongPlugin resource only needs to be defined once and can be applied to any Service, Consumer, or Route in the namespace. If you want the plugin to be available cluster-wide, create the resource as a KongClusterPlugin instead of KongPlugin.
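
As a usage sketch, assuming the two manifests above are saved to local files (the file names here are hypothetical), they can be applied with kubectl:

$ kubectl apply -f prometheus-kongplugin.yaml
$ kubectl apply -f service.yaml

Alternatively, an existing Service can be annotated in place:

$ kubectl annotate service <service> konghq.com/plugins=<prometheus-example>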

Declarative (YAML)

For example, configure this plugin on a Service by adding this section to your declarative configuration file:

plugins:
- name: prometheus
  service: <service>
  config:
    <optional_parameter>: <value>

<service> is the id or name of the Service that this plugin configuration will target.
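
For context, here is a minimal sketch of how that entry fits into a complete declarative configuration file, assuming the Kong 2.x declarative format; the service name and upstream URL are placeholders:

_format_version: "2.1"

services:
- name: example-service      # hypothetical Service name
  url: http://httpbin.org    # hypothetical upstream URL

plugins:
- name: prometheus
  service: example-service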

Enabling the plugin globally

A plugin which is not associated with any Service, Route, or Consumer is considered global, and will run on every request. Read the Plugin Reference and the Plugin Precedence sections for more information.

Kong Admin API

For example, configure this plugin globally with:

$ curl -X POST http://<admin-hostname>:8001/plugins/ \
    --data "name=prometheus" 

Kubernetes

Create a KongClusterPlugin resource and label it as global:

apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: <global-prometheus>
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: \"true\"
config:
  <optional_parameter>: <value>
plugin: prometheus

Declarative (YAML)

For example, configure this plugin using the plugins: entry in the declarative configuration file:

plugins:
- name: prometheus
  config:
    <optional_parameter>: <value>

Parameters

Here's a list of all the parameters which can be used in this plugin's configuration:

name
  Type: string
  The name of the plugin to use, in this case prometheus.

service.id
  Type: string
  The ID of the Service the plugin targets.

enabled
  Type: boolean
  Default value: true
  Whether this plugin will be applied.
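
For example, an existing plugin instance could later be disabled by updating the enabled field through the Admin API. This is a hedged sketch; <plugin-id> is the ID returned when the plugin was created:

$ curl -X PATCH http://<admin-hostname>:8001/plugins/<plugin-id> \
    --data "enabled=false"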

Metrics are available on both the Admin API and Status API at the http://localhost:<port>/metrics endpoint. Note that the URL to those APIs will be specific to your installation; see Accessing the metrics below.

This plugin records and exposes metrics at the node level. Your Prometheus server will need to discover all Kong nodes via a service discovery mechanism, and consume data from each node’s configured /metrics endpoint.
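
As an illustration of that setup, here is a minimal sketch of a Prometheus scrape configuration; the job name and target addresses are placeholders, and the port should match whichever Admin API or Status API listener exposes /metrics in your installation:

scrape_configs:
- job_name: kong
  # Replace static_configs with a service discovery mechanism
  # (for example kubernetes_sd_configs) in dynamic environments.
  static_configs:
  - targets:
    - kong-node-1:8001
    - kong-node-2:8001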

Grafana dashboard

Metrics exported by the plugin can be graphed in Grafana using a drop-in dashboard: https://grafana.com/dashboards/7424.

Available metrics

  • Status codes: HTTP status codes returned by Upstream services. These are available per service and across all services.
  • Latency histograms: Latency as measured at Kong:
    • Request: Total time taken by Kong and Upstream services to serve requests.
    • Kong: Time taken for Kong to route a request and run all configured plugins.
    • Upstream: Time taken by the Upstream service to respond to requests.
  • Bandwidth: Total Bandwidth (egress/ingress) flowing through Kong. This metric is available per service and as a sum across all services.
  • DB reachability: A gauge type with a value of 0 or 1, which represents whether the database can be reached by a Kong node.
  • Connections: Various Nginx connection metrics like active, reading, writing, and number of accepted connections.
  • Target Health: The healthiness status (healthchecks_off, healthy, unhealthy, or dns_error) of Targets belonging to a given Upstream.

Here is an example of output you could expect from the /metrics endpoint:

$ curl -i http://localhost:8001/metrics
HTTP/1.1 200 OK
Server: openresty/1.15.8.3
Date: Tue, 7 Jun 2020 16:35:40 GMT
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Access-Control-Allow-Origin: *

# HELP kong_bandwidth_total Total bandwidth in bytes for all proxied requests in Kong
# TYPE kong_bandwidth_total counter
kong_bandwidth_total{type="egress"} 1277
kong_bandwidth_total{type="ingress"} 254
# HELP kong_bandwidth Total bandwidth in bytes consumed per service in Kong
# TYPE kong_bandwidth counter
kong_bandwidth{type="egress",service="google"} 1277
kong_bandwidth{type="ingress",service="google"} 254
# HELP kong_datastore_reachable Datastore reachable from Kong, 0 is unreachable
# TYPE kong_datastore_reachable gauge
kong_datastore_reachable 1
# HELP kong_http_status_total HTTP status codes aggreggated across all services in Kong
# TYPE kong_http_status_total counter
kong_http_status_total{code="301"} 2
# HELP kong_http_status HTTP status codes per service in Kong
# TYPE kong_http_status counter
kong_http_status{code="301",service="google"} 2
# HELP kong_latency Latency added by Kong, total request time and upstream latency for each service in Kong
# TYPE kong_latency histogram
kong_latency_bucket{type="kong",service="google",le="00001.0"} 1
kong_latency_bucket{type="kong",service="google",le="00002.0"} 1
.
.
.
kong_latency_bucket{type="kong",service="google",le="+Inf"} 2
kong_latency_bucket{type="request",service="google",le="00300.0"} 1
kong_latency_bucket{type="request",service="google",le="00400.0"} 1
.
.
kong_latency_bucket{type="request",service="google",le="+Inf"} 2
kong_latency_bucket{type="upstream",service="google",le="00300.0"} 2
kong_latency_bucket{type="upstream",service="google",le="00400.0"} 2
.
.
kong_latency_bucket{type="upstream",service="google",le="+Inf"} 2
kong_latency_count{type="kong",service="google"} 2
kong_latency_count{type="request",service="google"} 2
kong_latency_count{type="upstream",service="google"} 2
kong_latency_sum{type="kong",service="google"} 2145
kong_latency_sum{type="request",service="google"} 2672
kong_latency_sum{type="upstream",service="google"} 527
# HELP kong_latency_total Latency added by Kong, total request time and upstream latency aggreggated across all services in Kong
# TYPE kong_latency_total histogram
kong_latency_total_bucket{type="kong",le="00001.0"} 1
kong_latency_total_bucket{type="kong",le="00002.0"} 1
.
.
kong_latency_total_bucket{type="kong",le="+Inf"} 2
kong_latency_total_bucket{type="request",le="00300.0"} 1
kong_latency_total_bucket{type="request",le="00400.0"} 1
.
.
kong_latency_total_bucket{type="request",le="+Inf"} 2
kong_latency_total_bucket{type="upstream",le="00300.0"} 2
kong_latency_total_bucket{type="upstream",le="00400.0"} 2
.
.
.
kong_latency_total_bucket{type="upstream",le="+Inf"} 2
kong_latency_total_count{type="kong"} 2
kong_latency_total_count{type="request"} 2
kong_latency_total_count{type="upstream"} 2
kong_latency_total_sum{type="kong"} 2145
kong_latency_total_sum{type="request"} 2672
kong_latency_total_sum{type="upstream"} 527
# HELP kong_nginx_http_current_connections Number of HTTP connections
# TYPE kong_nginx_http_current_connections gauge
kong_nginx_http_current_connections{state="accepted"} 8
kong_nginx_http_current_connections{state="active"} 1
kong_nginx_http_current_connections{state="handled"} 8
kong_nginx_http_current_connections{state="reading"} 0
kong_nginx_http_current_connections{state="total"} 8
kong_nginx_http_current_connections{state="waiting"} 0
kong_nginx_http_current_connections{state="writing"} 1
# HELP kong_nginx_metric_errors_total Number of nginx-lua-prometheus errors
# TYPE kong_nginx_metric_errors_total counter
kong_nginx_metric_errors_total 0
# HELP kong_upstream_target_health Health status of targets of upstream. States = healthchecks_off|healthy|unhealthy|dns_error, value is 1 when state is populated.
# TYPE kong_upstream_target_health gauge
kong_upstream_target_health{upstream="<upstream_name>",target="<target>",address="<ip>:<port>",state="healthchecks_off"} 0
kong_upstream_target_health{upstream="<upstream_name>",target="<target>",address="<ip>:<port>",state="healthy"} 1
kong_upstream_target_health{upstream="<upstream_name>",target="<target>",address="<ip>:<port>",state="unhealthy"} 0
kong_upstream_target_health{upstream="<upstream_name>",target="<target>",address="<ip>:<port>",state="dns_error"} 0

Accessing the metrics

In most configurations, the Kong Admin API is behind a firewall or set up to require authentication. Here are a couple of options for giving Prometheus access to the /metrics endpoint:

  1. If the Status API is enabled, then its /metrics endpoint can be used. This is the preferred method.

  2. The /metrics endpoint is also available on the Admin API, which can be used if the Status API is not enabled. Note that this endpoint is unavailable when RBAC is enabled on the Admin API (Prometheus does not support Key-Auth to pass the token).
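
As a sketch of option 1, the Status API is enabled by configuring a status listener, for example by setting status_listen in kong.conf (or the KONG_STATUS_LISTEN environment variable); the address and port below are examples only:

status_listen = 0.0.0.0:8100

With that in place, the metrics can be scraped without exposing the Admin API:

$ curl -i http://localhost:8100/metrics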
