Looking for the plugin's configuration parameters? You can find them in the OpenTelemetry configuration reference doc.
Propagate distributed tracing spans and report low-level spans to an OTLP-compatible server.
The OpenTelemetry plugin is fully compatible with the OpenTelemetry specification and can be used with any OpenTelemetry-compatible backend.
How it works
This section describes how the OpenTelemetry plugin works.
Collecting telemetry data
There are two ways to set up an OpenTelemetry backend:
- Use an OpenTelemetry-compatible backend directly, such as Jaeger (v1.35.0+). All vendors supported by OpenTelemetry are listed on OpenTelemetry’s Vendor support page.
- Use the OpenTelemetry Collector, a middleware component that can proxy OpenTelemetry spans to a compatible backend. You can view all the available OpenTelemetry Collector exporters at open-telemetry/opentelemetry-collector-contrib.
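As a minimal sketch of either option, the following declarative configuration enables the plugin and points it at an OTLP/HTTP endpoint. The endpoint parameter name and URL are illustrative assumptions; check the configuration reference for the exact fields supported by your Kong Gateway version.
plugins:
  - name: opentelemetry
    config:
      # Hypothetical OTLP/HTTP traces endpoint; point this at your
      # OpenTelemetry-compatible backend or at an OpenTelemetry Collector.
      traces_endpoint: http://otel-collector:4318/v1/traces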
Metrics
Metrics are enabled using the contrib version of the OpenTelemetry Collector.
The spanmetrics connector allows you to aggregate traces and provide metrics to any third-party observability platform.
To include span metrics for application traces, configure the connectors and service sections of the OpenTelemetry Collector configuration file:
connectors:
  spanmetrics:
    dimensions:
      - name: http.method
        default: GET
      - name: http.status_code
      - name: http.route
    exclude_dimensions:
      - status.code
    metrics_flush_interval: 15s
    histogram:
      disable: false
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [spanmetrics]
    metrics:
      receivers: [spanmetrics]
      processors: []
      exporters: [otlphttp]
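In this sketch, spans arriving on the otlp receiver are exported into the spanmetrics connector, which feeds the aggregated metrics into the metrics pipeline; the otlphttp exporter shown here is only a placeholder for whichever metrics exporter your collector already defines.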
Tracing
Built-in tracing instrumentations
Kong Gateway has a series of built-in tracing instrumentations, which are controlled by the tracing_instrumentations configuration property.
By default, Kong Gateway creates a top-level span for each request when tracing_instrumentations is enabled.
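For example, a kong.conf fragment along these lines (values are illustrative) enables all built-in instrumentations and samples every request; tracing_sampling_rate is discussed further under Known issues.
# kong.conf (illustrative values)
tracing_instrumentations = all
tracing_sampling_rate = 1.0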
The top-level span has the following attributes:
- http.method: HTTP method
- http.url: HTTP URL
- http.host: HTTP host
- http.scheme: HTTP scheme (http or https)
- http.flavor: HTTP version
- net.peer.ip: Client IP address
Propagation
The OpenTelemetry plugin supports propagation of the following header formats:
- w3c: W3C trace context
- b3 and b3-single: Zipkin headers
- jaeger: Jaeger headers
- ot: OpenTracing headers
- datadog: Datadog headers
- aws: AWS X-Ray header
- gcp: GCP X-Cloud-Trace-Context header
This plugin offers flexible options for configuring tracing header propagation: you can customize which header formats are used to extract and inject the tracing context, and which headers are cleared after the tracing context is extracted.
flowchart LR
  id1(Original Request) -->|"headers (original)"| Extract
  subgraph ide1 [Headers Propagation]
    Extract -->|"headers (original)"| Clear
    Clear -->|"headers (filtered)"| Inject
  end
  Extract -.->|extracted ctx| id2((tracing logic))
  id2((tracing logic)) -.->|updated ctx| Inject
  Inject -->|"headers (updated ctx)"| id3(Updated request)
The following examples demonstrate how the propagation configuration options can be used to achieve various use cases.
Extract, clear, and inject
- Extract the tracing context using this order of precedence: w3c > b3 > jaeger > ot > aws > datadog
- Clear the b3 and uber-trace-id headers after extraction, if present in the request
- Inject the tracing context using the w3c format

config:
  propagation:
    extract: [ w3c, b3, jaeger, ot, aws, datadog ]
    clear: [ b3, uber-trace-id ]
    inject: [ w3c ]
Multiple injection
- Extract the tracing context from: b3
- Inject the tracing context using the formats: w3c, b3, jaeger, ot, aws, datadog, gcp

config:
  propagation:
    extract: [ b3 ]
    inject: [ w3c, b3, jaeger, ot, aws, datadog, gcp ]
Preserve incoming format
- Extract the tracing context using this order of precedence: w3c > b3 > jaeger > ot > aws > datadog
- Inject the tracing context in the extracted header type
- Default to w3c for context injection if none of the extract header types were found in the request

config:
  propagation:
    extract: [ w3c, b3, jaeger, ot, aws, datadog ]
    inject: [ preserve ]
    default_format: "w3c"
preserve can also be used with other formats, to specify that the incoming format should be preserved in addition to the others:

config:
  propagation:
    extract: [ w3c, b3, jaeger, ot, datadog ]
    inject: [ aws, preserve, datadog ]
    default_format: "w3c"
Ignore incoming headers
- No tracing context extraction
- Inject the tracing context using the formats: b3, datadog

config:
  propagation:
    extract: [ ]
    inject: [ b3, datadog ]
Note: Some header formats specify different trace and span ID sizes. When the tracing context is extracted and injected from/to headers with different ID sizes, the IDs are truncated or left-padded to align with the target format.
Refer to the plugin’s configuration reference for a complete overview of the available options and values.
Note: If any of the propagation.* configuration options (extract, clear, or inject) are configured, the propagation configuration takes precedence over the deprecated header_type parameter. If none of the propagation.* configuration options are set, the header_type parameter is still used to determine the propagation behavior.
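For example, in the following sketch the deprecated header_type value is ignored because propagation.extract and propagation.inject are set (header_type: jaeger here is only a placeholder used to illustrate the precedence):
config:
  header_type: jaeger        # deprecated; ignored because propagation.* is set
  propagation:
    extract: [ w3c ]
    inject: [ preserve ]
    default_format: "w3c"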
OTLP exporter
The OpenTelemetry plugin implements the OTLP/HTTP exporter, which sends Protobuf payloads in binary format over HTTP/1.1.
connect_timeout, read_timeout, and write_timeout set the timeouts for the HTTP request.
batch_span_count and batch_flush_delay set the maximum number of spans per batch and the delay between two consecutive batches.
Logging
This plugin supports OpenTelemetry Logging, which can be configured as described in the configuration reference to export logs in OpenTelemetry format to an OTLP-compatible backend.
Log scopes
Two kinds of logs are exported: request-scoped and non-request-scoped.
- Request-scoped logs are directly associated with requests. They are produced during the request lifecycle. For example, this could be logs generated during a plugin’s Access phase.
- Non-request-scoped logs are not directly associated with a request. They are produced outside the request lifecycle. For example, this could be logs generated asynchronously or during a worker’s startup.
Log level
Logs are reported based on the log level that is configured for Kong Gateway. If a log is emitted with a level that is lower than the configured log level, it is not exported.
Note: Not all logs are guaranteed to be exported. Logs that are not exported include those produced by the Nginx master process and low-level errors produced by Nginx. Operators are expected to capture the Nginx
error.log
file in addition to using this feature for observability purposes.
Log entry
Each log entry adheres to the OpenTelemetry Logs Data Model. The available information depends on the log scope and on whether tracing is enabled for this plugin.
Every log entry includes the following fields:
- Timestamp: Time when the event occurred.
- ObservedTimestamp: Time when the event was observed.
- SeverityText: The severity text (log level).
- SeverityNumber: Numerical value of the severity.
- Body: The error log line.
- Resource: Configurable resource attributes.
- InstrumentationScope: Metadata that describes Kong’s data emitter.
- Attributes: Additional information about the event.
  - introspection.source: Full path of the file that emitted the log.
  - introspection.current.line: Line number that emitted the log.

In addition to the above, request-scoped logs include:
- Attributes: Additional information about the event.
  - request.id: Kong’s request ID.

In addition to the above, when tracing is enabled, request-scoped logs include:
- TraceID: Request trace ID.
- SpanID: Request span ID.
- TraceFlags: W3C trace flag.
Queueing
The OpenTelemetry plugin uses a queue to decouple the production and consumption of data. This reduces the number of concurrent requests made to the upstream server under high load situations and provides buffering during temporary network or upstream outages.
You can set several parameters to configure the behavior and capacity of the queues used by the plugin. For more information about how to use these parameters, see Plugin Queuing Reference in the Kong Gateway documentation.
The queue parameters all reside in a record under the queue key in the config parameter section of the plugin.
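A minimal sketch of queue tuning is shown below; the parameter names follow the Plugin Queuing Reference, but treat the specific names and values as assumptions to verify against that document.
config:
  queue:
    max_batch_size: 200         # entries sent per batch to the OTLP endpoint
    max_coalescing_delay: 1     # seconds to wait while coalescing a batch
    max_entries: 10000          # upper bound on queued entries per worker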
Queues are not shared between workers, and queueing parameters are scoped to one worker. For whole-system capacity planning, the number of workers must be taken into account when setting queue parameters.
Trace IDs in serialized logs
When the OpenTelemetry plugin is configured along with a plugin that uses the
Log Serializer,
the trace ID of each request is added to the key trace_id
in the serialized log output.
The value of this field is an object that can contain different formats of the current request’s trace ID. When a request contains multiple tracing headers, the trace_id field includes one trace ID format for each header format, as in the following example:
"trace_id": {
"w3c": "4bf92f3577b34da6a3ce929d0e0e4736",
"datadog": "11803532876627986230"
},
Troubleshooting
The OpenTelemetry spans are printed to the console when the log level is set to debug
in the Kong configuration file.
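For example, in kong.conf (the same property can also be set through the corresponding KONG_LOG_LEVEL environment variable):
log_level = debug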
An example of debug logs output:
2022/06/02 15:28:42 [debug] 650#0: *111 [lua] instrumentation.lua:302: runloop_log_after(): [tracing] collected 6 spans:
Span #1 name=GET /wrk duration=1502.994944ms attributes={"http.url":"/wrk","http.method":"GET","http.flavor":1.1,"http.host":"127.0.0.1","http.scheme":"http","net.peer.ip":"172.18.0.1"}
Span #2 name=rewrite phase: opentelemetry duration=0.391936ms
Span #3 name=router duration=0.013824ms
Span #4 name=access phase: cors duration=1500.824576ms
Span #5 name=cors: heavy works duration=1500.709632ms attributes={"username":"kongers"}
Span #6 name=balancer try #1 duration=0.99328ms attributes={"net.peer.ip":"104.21.11.162","net.peer.port":80}
Known issues
- The plugin only supports the HTTP protocols (http/https) of Kong Gateway.
- The plugin may impact the performance of Kong Gateway. It’s recommended to set the sampling rate (tracing_sampling_rate) in the Kong configuration file when using the OpenTelemetry plugin.