Data plane on Kubernetes
On Kubernetes, the Dataplane entity is automatically created for you, and because transparent proxying is used to communicate between the service and the sidecar proxy, no code changes are required in your applications.
The Kong Mesh control plane injects a kuma-sidecar container into your Pod. If you're not using the CNI, it also injects a kuma-init container into initContainers to set up transparent proxying.
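You can verify that injection happened by listing the containers of a running Pod; the output should include kuma-sidecar:

kubectl get pod <podName> -o jsonpath='{.spec.containers[*].name}'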
You can control whether Kong Mesh automatically injects the data plane proxy by labeling either the Namespace or the Pod with kuma.io/sidecar-injection=enabled, e.g.:
apiVersion: v1
kind: Namespace
metadata:
name: kuma-example
labels:
# inject Kong Mesh sidecar into every Pod in that Namespace,
# unless a user explicitly opts out on per-Pod basis
kuma.io/sidecar-injection: enabled
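Equivalently, you can add the label to an existing Namespace from the command line (only Pods created after the label is applied get the sidecar injected):

kubectl label namespace kuma-example kuma.io/sidecar-injection=enabled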
To opt out of data plane injection for a particular Pod, you need to label it with kuma.io/sidecar-injection=disabled, e.g.:
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-app
namespace: kuma-example
spec:
...
template:
metadata:
...
labels:
# indicate to Kong Mesh that this Pod doesn't need a sidecar
kuma.io/sidecar-injection: disabled
spec:
containers:
...
In previous versions the recommended way was to use annotations. While annotations are still supported, we strongly recommend using labels: they are the only way to guarantee that applications can only be started with the sidecar.
Once your Pod is running, you can see the Dataplane resource that matches it using kubectl:

kubectl get dataplanes <podName>
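To inspect the generated resource in full, including the tags described in the next section, request its YAML form:

kubectl get dataplane <podName> -n <namespace> -o yaml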
Tag generation
When Dataplane entities are automatically created, all labels from the Pod are converted into Dataplane tags. Labels with keys that contain kuma.io/ are not converted because they are reserved for Kong Mesh. The following tags are added automatically and cannot be overridden using Pod labels.
- kuma.io/service: Identifies the service name based on a Service that selects a Pod. This will be of format <name>_<namespace>_svc_<port>, where <name>, <namespace> and <port> come from the Kubernetes Service that is associated with this particular Pod. When a Pod is spawned without being associated with any Kubernetes Service resource, the data plane tag will be kuma.io/service: <name>_<namespace>_svc, where <name> and <namespace> are extracted from the Pod resource metadata.
- kuma.io/zone: Identifies the zone name in a multi-zone deployment.
- kuma.io/protocol: Identifies the protocol that was defined by the appProtocol field on the Service that selects the Pod.
- k8s.kuma.io/namespace: Identifies the Pod's namespace. Example: kuma-demo.
- k8s.kuma.io/service-name: Identifies the name of the Kubernetes Service that selects the Pod. Example: demo-app.
- k8s.kuma.io/service-port: Identifies the port of the Kubernetes Service that selects the Pod. Example: 80.
- If a Kubernetes Service exposes more than one port, multiple inbounds will be generated, each with a different kuma.io/service.
- If a Pod is attached to more than one Kubernetes Service, multiple inbounds will also be generated.
Example
apiVersion: v1
kind: Pod
metadata:
name: my-app
namespace: my-namespace
labels:
foo: bar
app: my-app
spec:
# ...
---
apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: my-namespace
spec:
selector:
app: my-app
type: ClusterIP
ports:
- name: port1
protocol: TCP
appProtocol: http
port: 80
targetPort: 8080
- name: port2
protocol: TCP
appProtocol: grpc
port: 1200
targetPort: 8081
---
apiVersion: v1
kind: Service
metadata:
name: my-other-service
namespace: my-namespace
spec:
selector:
foo: bar
type: ClusterIP
ports:
- protocol: TCP
appProtocol: http
port: 81
targetPort: 8080
This will generate the following inbounds in your Kong Mesh Dataplane:
...
inbound:
- port: 8080
tags:
kuma.io/protocol: http
kuma.io/service: my-service_my-namespace_svc_80
k8s.kuma.io/service-name: my-service
k8s.kuma.io/service-port: "80"
k8s.kuma.io/namespace: my-namespace
# Labels coming from your pod
app: my-app
foo: bar
- port: 8081
tags:
kuma.io/protocol: grpc
kuma.io/service: my-service_my-namespace_svc_1200
k8s.kuma.io/service-name: my-service
k8s.kuma.io/service-port: "1200"
k8s.kuma.io/namespace: my-namespace
# Labels coming from your pod
app: my-app
foo: bar
- port: 8080
tags:
kuma.io/protocol: http
kuma.io/service: my-other-service_my-namespace_svc_81
k8s.kuma.io/service-name: my-other-service
k8s.kuma.io/service-port: "81"
k8s.kuma.io/namespace: my-namespace
# Labels coming from your pod
app: my-app
foo: bar
Notice how kuma.io/service follows the format <serviceName>_<namespace>_svc_<port> and kuma.io/protocol comes from the appProtocol field of your Service entry.
Capabilities
The only required capability for the sidecar is NET_BIND_SERVICE. Use ContainerPatch to control capabilities for the sidecar.
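For example, here is a minimal sketch of a ContainerPatch (the resource name is illustrative) that drops every capability except NET_BIND_SERVICE from the sidecar, following the value-as-JSON-string convention described under Custom Container Configuration below:

apiVersion: kuma.io/v1alpha1
kind: ContainerPatch
metadata:
  name: sidecar-capabilities
  namespace: kong-mesh-system
spec:
  sidecarPatch:
    # each value must be a string containing valid JSON
    - op: add
      path: /securityContext/capabilities
      value: '{"drop": ["ALL"], "add": ["NET_BIND_SERVICE"]}'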
Lifecycle
Joining the mesh
On Kubernetes, the Dataplane resource is automatically created by kuma-cp. For each Pod with the sidecar-injection label, a new Dataplane resource will be created.
To join the mesh gracefully, we first need to make sure the application is ready to serve traffic before it can be considered a valid traffic destination.
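On Kubernetes this is typically achieved with a standard readiness probe on the application container, so the Pod only becomes a traffic destination once the probe succeeds. A minimal sketch (path, port, and timings are illustrative):

spec:
  containers:
    - name: my-app
      image: my-app:latest
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10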
Init containers
Due to the way that Kong Mesh implements transparent proxying and sidecars in Kubernetes, network calls from init containers while running a mesh can be a challenge.
Network calls to outside of the mesh
The common pitfall is the idea that it's possible to order init containers so that the mesh init container runs after the others. However, when init containers are injected into a Pod via webhooks, such as the Vault init container, there is no guarantee of ordering. Ordering init containers also doesn't help when the Kong Mesh CNI is used, because traffic redirection to the sidecar occurs before any init container runs.
To solve this issue, start the init container with a specific user ID and exclude specific ports from interception. Remember to also exclude the DNS interception port. Here is an example of annotations that enable HTTPS traffic for a container running as user ID 1234:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
template:
metadata:
annotations:
traffic.kuma.io/exclude-outbound-tcp-ports-for-uids: "443:1234"
traffic.kuma.io/exclude-outbound-udp-ports-for-uids: "53:1234"
spec:
initContainers:
- name: my-init-container
...
securityContext:
runAsUser: 1234
Network calls inside the mesh with mTLS enabled
In this scenario, using an init container is simply impossible, because kuma-dp is responsible for encrypting the traffic and only runs after all init containers have exited.
Leaving the mesh
To leave the mesh in a graceful shutdown, we need to remove the traffic destination from all the clients before shutting it down.
When the Kong Mesh sidecar receives a SIGTERM signal, it:
- Starts draining Envoy listeners.
- Waits the entire drain time.
- Terminates.
While draining, Envoy can still accept connections, however:
- It is marked unhealthy on the Envoy Admin /ready endpoint.
- It sends connection: close for HTTP/1.1 requests and the GOAWAY frame for HTTP/2. This forces clients to close their connection and reconnect to the new instance.
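You can observe this state through the Envoy admin interface; a quick check, assuming the default admin port 9901 (the admin interface listens on localhost inside the Pod, so forward it first):

kubectl port-forward <podName> 9901:9901 &
# returns LIVE with HTTP 200 when healthy, HTTP 503 while draining
curl -si http://127.0.0.1:9901/ready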
You can read the Kubernetes docs to learn how Kubernetes handles the Pod lifecycle. Here is a summary of the parts relevant for Kong Mesh.
Whenever a user or system deletes a Pod, Kubernetes does the following:
- It marks the Pod as terminated.
- For every container, concurrently, it:
  - Executes any pre-stop hook, if defined.
  - Sends a SIGTERM signal.
  - Waits until the container terminates, for at most the graceful termination period (30s by default).
  - Sends a SIGKILL to the container.
- It removes the Pod object from the system.
When the Pod is marked as terminated, the Kong Mesh control plane marks the Dataplane object as unhealthy, which triggers a configuration update to all the clients in order to remove it as a destination. This can take a couple of seconds depending on the size of the mesh, the resources available to the control plane, the XDS configuration interval, and so on.
If the application served by the Kong Mesh sidecar quits immediately after the SIGTERM signal, there is a high chance that clients will still try to send traffic to this destination.
To mitigate this, we need to either:
- Support graceful shutdown in the application. For example, the application should wait X seconds to exit after receiving the first SIGTERM signal.
- Add a pre-stop hook to postpone stopping the application container. Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  template:
    spec:
      containers:
        - name: redis
          image: "redis"
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sleep", "15"]
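Keep in mind that the pre-stop hook counts against the Pod's termination grace period. If you use a longer delay, extend the grace period accordingly; a minimal sketch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  template:
    spec:
      # must exceed the pre-stop delay plus the app's own shutdown time
      terminationGracePeriodSeconds: 60
      containers:
        - name: redis
          image: "redis"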
When a Pod is deleted, its matching Dataplane resource is deleted as well. This is possible thanks to the owner reference set on the Dataplane resource.
Custom Container Configuration
If you want to modify the default container configuration, you can use the ContainerPatch Kubernetes CRD. It allows configuration of both sidecar and init containers. ContainerPatch resources are namespace-scoped and can only be applied in the namespace where the Kong Mesh CP is running.
In the vast majority of cases you shouldn't need to override the sidecar and init container configurations. ContainerPatch is a feature which requires a good understanding of both Kong Mesh and Kubernetes.
A ContainerPatch specification consists of a list of JSON patch strings that describe the modifications. Consult the entire resource schema.
Example
When using ContainerPatch, every value field must be a string containing valid JSON.
apiVersion: kuma.io/v1alpha1
kind: ContainerPatch
metadata:
name: container-patch-1
namespace: kong-mesh-system
spec:
sidecarPatch:
- op: add
path: /securityContext/privileged
value: "true"
- op: add
path: /resources/requests/cpu
value: '"100m"'
- op: add
path: /resources/limits
value: '{
"cpu": "500m",
"memory": "256Mi"
}'
initPatch:
- op: add
path: /securityContext/runAsNonRoot
value: "true"
- op: remove
path: /securityContext/runAsUser
This will change the securityContext section of the kuma-sidecar container from:
securityContext:
runAsGroup: 5678
runAsUser: 5678
to:
securityContext:
runAsGroup: 5678
runAsUser: 5678
privileged: true
and similarly change the securityContext section of the init container from:
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
runAsGroup: 0
runAsUser: 0
to:
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
runAsGroup: 0
runAsNonRoot: true
The CPU resource request will be changed from:
requests:
  cpu: 50m
to:
requests:
  cpu: 100m
The resource limits will be changed from:
limits:
cpu: 1000m
memory: 512Mi
to:
limits:
cpu: 500m
memory: 256Mi
Workload matching
A ContainerPatch is matched to a Pod via a kuma.io/container-patches annotation on the workload. The annotation value is an ordered list of ContainerPatch names, which will be applied in the order specified.
Example
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: app-ns
name: app-deployment
spec:
replicas: 1
selector:
matchLabels:
app: app-deployment
template:
metadata:
labels:
app: app-deployment
annotations:
kuma.io/container-patches: container-patch-1,container-patch-2
spec: [...]
Default patches
You can configure kuma-cp to apply a list of default patches to workloads which don't specify their own patches by modifying the containerPatches value in the kuma-cp configuration:
[...]
runtime:
kubernetes:
injector:
containerPatches: [ ]
[...]
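For example, a control-plane configuration fragment that applies two default patches (the patch names are illustrative):

runtime:
  kubernetes:
    injector:
      containerPatches: ["default-patch-1", "default-patch-2"]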
If you specify a list of default patches (e.g. ["default-patch-1", "default-patch-2"]) but your workload is annotated with its own list of patches (e.g. ["pod-patch-1", "pod-patch-2"]), only the latter will be applied.
To install a control plane with these defaults set via an environment variable, you can run:
kumactl install control-plane --env-var "KUMA_RUNTIME_KUBERNETES_INJECTOR_CONTAINER_PATCHES=patch1,patch2"
Error modes and validation
When applying a ContainerPatch, Kong Mesh will validate that the rendered container spec meets the Kubernetes specification. Kong Mesh will not validate that it is a sane configuration.
If a workload refers to a ContainerPatch which does not exist, the injection will explicitly fail and log the failure.
Direct access to services
By default, on Kubernetes data plane proxies communicate with each other by leveraging the ClusterIP address of the Service resources. Also by default, any request made to another service is automatically load balanced client-side by the data plane proxy that originates the request (requests are load balanced by the local Envoy sidecar proxy).
There are situations where we may want to bypass the client-side load balancing and directly access services by their IP address (e.g. Prometheus scraping metrics from services by their individual IP addresses).
When an originating service wants to directly consume other services by their IP address, the originating service's Deployment resource must include the following annotation:
kuma.io/direct-access-services: Service1, Service2, ServiceN
Where the value is a comma-separated list of Kong Mesh services that will be consumed directly. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-app
namespace: kuma-example
spec:
...
template:
metadata:
...
annotations:
kuma.io/direct-access-services: "backend_example_svc_1234,backend_example_svc_1235"
spec:
containers:
...
Note: When using direct access with a headless Service, the destination service will be accessible at: kuma-service.pod-name.mesh
We can also use * to indicate direct access to every service in the Mesh:
kuma.io/direct-access-services: *
Using * to directly access every service is a resource-intensive operation, so we must use it carefully.