This policy uses the new policy matching algorithm and is in beta state. It should not be mixed with TrafficTrace.
This policy enables publishing traces to a third-party tracing solution.
Tracing is supported over HTTP, HTTP2, and gRPC protocols. You must explicitly specify the protocol for each service and data plane proxy you want to enable tracing for.
Kong Mesh currently supports the following trace exposition formats:
- `zipkin`: traces in this format can be sent to many different tracing backends
- `datadog`
Services still need to be instrumented to preserve the trace chain across requests made between different services.
You can instrument with a language library of your choice (for zipkin and for datadog). For HTTP you can also manually forward the following headers:

- `x-request-id`
- `x-b3-traceid`
- `x-b3-parentspanid`
- `x-b3-spanid`
- `x-b3-sampled`
- `x-b3-flags`
TargetRef support matrix
| TargetRef type | top level | to | from |
| -------------- | --------- | -- | ---- |
To learn more about the information in this table, see the matching docs.
Most of the time setting only `overall` is sufficient; `random` and `client` are for advanced use cases.
You can configure sampling settings equivalent to Envoy’s:
The value is always a percentage and is between 0 and 100.
```yaml
sampling:
  overall: 80
  random: 60
  client: 40
```
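Although the fragment above is shown on its own, `sampling` is configured under the policy's `default` section, alongside its backends. The placement below is a sketch assumed from the configuration fragments on this page; the collector URL is illustrative:

```yaml
# Assumed placement: sampling lives under spec.default,
# next to the backends it applies to.
spec:
  default:
    backends:
      - zipkin:
          url: http://jaeger-collector:9411/api/v2/spans
    sampling:
      overall: 80
      random: 60
      client: 40
```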
You can add tags to trace metadata by directly supplying the value (`literal`) or by taking it from a header (`header`):
```yaml
tags:
  - name: team
    literal: core
  - name: env
    header:
      name: x-env
      default: prod
  - name: version
    header:
      name: x-version
```
If a value is missing for `header`, `default` is used. If `default` isn't provided, then the tag won't be added.
You can configure a Datadog backend with a `url` and a `splitService` setting:
```yaml
datadog:
  url: http://my-agent:8080 # Required. The url to reach a running Datadog agent
  splitService: true # Default to false. If true, it will split inbound and outbound requests in different services in Datadog
```
The `splitService` property determines whether Datadog service names should be split based on traffic direction and destination.
For example, with `splitService: true` and a `backend` service that communicates with a couple of databases, you would get service names like `backend_INBOUND`, `backend_OUTBOUND_db1`, and `backend_OUTBOUND_db2` in Datadog.
This assumes a Datadog agent is configured and running. If you haven't already, check the Datadog observability page.

For a Zipkin backend, in most cases the only field you'll want to set is `url`:

```yaml
zipkin:
  url: http://jaeger-collector:9411/api/v2/spans # Required. The url to a zipkin collector to send traces to
  traceId128bit: false # Default to false which will expose a 64bits traceId. If true, the id of the trace is 128bits
  apiVersion: httpJson # Default to httpJson. It can be httpJson, httpProto and is the version of the zipkin API
  sharedSpanContext: false # Default to true. If true, the inbound and outbound traffic will share the same span.
```
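Putting the pieces together, a complete MeshTrace resource with a Zipkin backend could look like the following sketch; the policy name, namespace, and collector URL are illustrative assumptions:

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: default                  # hypothetical policy name
  namespace: kong-mesh-system    # assumed Kong Mesh system namespace
  labels:
    kuma.io/mesh: default        # mesh the policy applies to
spec:
  targetRef:
    kind: Mesh                   # apply to every data plane proxy in the mesh
  default:
    backends:
      - zipkin:
          url: http://jaeger-collector:9411/api/v2/spans
```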
Targeting parts of the infrastructure
While usually you want all the traces to be sent to the same tracing backend, you can target parts of a Mesh by using a finer-grained `targetRef` and a designated backend to trace different paths of your service traffic.
This is especially useful when you want traces to never leave a world region, or a cloud, for example.
In this example, we have two zones, `east` and `west`, each with its own Zipkin collector.
We want dataplane proxies in each zone to only send traces to their local collector.
To do this, we use a `targetRef` kind value of `MeshSubset` to filter which data plane proxies a policy applies to.
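As a sketch under these assumptions (the policy names, system namespace, and collector addresses are illustrative; `kuma.io/zone` is the built-in tag identifying the zone a proxy runs in), one policy per zone could look like:

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: trace-east               # hypothetical name
  namespace: kong-mesh-system    # assumed system namespace
  labels:
    kuma.io/mesh: default
spec:
  targetRef:
    kind: MeshSubset
    tags:
      kuma.io/zone: east         # only data plane proxies in the east zone
  default:
    backends:
      - zipkin:
          url: http://east.zipkincollector:9411/api/v2/spans  # assumed local collector
---
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: trace-west               # hypothetical name
  namespace: kong-mesh-system
  labels:
    kuma.io/mesh: default
spec:
  targetRef:
    kind: MeshSubset
    tags:
      kuma.io/zone: west         # only data plane proxies in the west zone
  default:
    backends:
      - zipkin:
          url: http://west.zipkincollector:9411/api/v2/spans  # assumed local collector
```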