Deploy a multi-zone global control plane
Prerequisites
To set up a multi-zone deployment we will need to:
- Set up the global control plane
- Set up the zone control planes
- Verify control plane connectivity
- Ensure mTLS is enabled on the multi-zone meshes
Usage
Set up the global control plane
The global control plane must run on a dedicated cluster (unless using “Universal on Kubernetes” mode), and cannot be assigned to a zone.
Kubernetes
The global control plane on Kubernetes must reside on its own Kubernetes cluster, to keep its resources separate from the resources the zone control planes create during synchronization.
Run:
kumactl install control-plane \
--set "kuma.controlPlane.mode=global" \
| kubectl apply -f -
Alternatively, install with Helm. Before using Kong Mesh with helm, please follow these steps to configure your local helm repo, then run:
helm install --create-namespace --namespace kong-mesh-system \
--set "kuma.controlPlane.mode=global" \
kong-mesh kong-mesh/kong-mesh
Find the external IP and port of the global-zone-sync service in the kong-mesh-system namespace:
kubectl get services -n kong-mesh-system
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-mesh-system global-zone-sync LoadBalancer 10.105.9.10 35.226.196.103 5685:30685/TCP 89s
kong-mesh-system kong-mesh-control-plane ClusterIP 10.105.12.133 <none> 5681/TCP,443/TCP,5676/TCP,5677/TCP,5678/TCP,5679/TCP,5682/TCP,5653/UDP 90s
By default, it's exposed on port 5685. In this example the value is 35.226.196.103:5685. You pass this as the value of <global-kds-address> when you set up the zone control planes.
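Before moving on, you can also confirm that the control plane pods are up; kong-mesh-system is the namespace used by the install commands above:
kubectl get pods -n kong-mesh-system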
Universal on Kubernetes using Helm
Running the global control plane in "Universal on Kubernetes" mode means using PostgreSQL as storage instead of Kubernetes, which changes the failover, high-availability, and reliability characteristics. Please read the Kubernetes and PostgreSQL docs for more details.
Before using Kong Mesh with helm, please follow these steps to configure your local helm repo and review the reference helm configuration (values.yaml).
- Define a Kubernetes Secret with the sensitive database information (the data values must be base64-encoded):
apiVersion: v1
kind: Secret
metadata:
  name: your-secret-name
type: Opaque
data:
  POSTGRES_DB: ...
  POSTGRES_HOST_RW: ...
  POSTGRES_USER: ...
  POSTGRES_PASSWORD: ...
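One way to create such a secret is with kubectl, which base64-encodes literal values for you; the values below are placeholders for your database details:
kubectl create secret generic your-secret-name \
  --namespace kong-mesh-system \
  --from-literal=POSTGRES_DB=<db-name> \
  --from-literal=POSTGRES_HOST_RW=<db-host> \
  --from-literal=POSTGRES_USER=<db-user> \
  --from-literal=POSTGRES_PASSWORD=<db-password>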
- Create a values.yaml file that sets kuma.controlPlane.environment=universal and kuma.controlPlane.mode=global in the chart (see the sketch below).
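A minimal sketch of that part of values.yaml, assuming the kuma top-level key used by the --set paths throughout this guide:
kuma:
  controlPlane:
    environment: universal
    mode: global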
- Set kuma.controlPlane.secrets with the sensitive database information:
# ...
secrets:
  postgresDb:
    Secret: your-secret-name
    Key: POSTGRES_DB
    Env: KUMA_STORE_POSTGRES_DB_NAME
  postgresHost:
    Secret: your-secret-name
    Key: POSTGRES_HOST_RW
    Env: KUMA_STORE_POSTGRES_HOST
  postgresUser:
    Secret: your-secret-name
    Key: POSTGRES_USER
    Env: KUMA_STORE_POSTGRES_USER
  postgresPassword:
    Secret: your-secret-name
    Key: POSTGRES_PASSWORD
    Env: KUMA_STORE_POSTGRES_PASSWORD
- Optionally set kuma.postgres with TLS settings:
# ...
# Postgres' settings for universal control plane on k8s
postgres:
  # -- Postgres port; the password should be provided as a secret reference in "controlPlane.secrets"
  # with the Env value "KUMA_STORE_POSTGRES_PASSWORD".
  # Example:
  # controlPlane:
  #   secrets:
  #     - Secret: postgres-postgresql
  #       Key: postgresql-password
  #       Env: KUMA_STORE_POSTGRES_PASSWORD
  port: "5432"
  # TLS settings
  tls:
    # -- Mode of TLS connection. Available values are: "disable", "verifyNone", "verifyCa", "verifyFull"
    mode: disable # ENV: KUMA_STORE_POSTGRES_TLS_MODE
    # -- Whether to disable SNI (the postgres `sslsni` option).
    disableSSLSNI: false # ENV: KUMA_STORE_POSTGRES_TLS_DISABLE_SSLSNI
    # -- Secret name that contains the ca.crt
    caSecretName:
    # -- Secret name that contains the client tls.crt, tls.key
    secretName:
- Run helm install:
helm install kong-mesh \
--create-namespace \
--skip-crds \
--namespace kong-mesh-system \
--values values.yaml \
kong-mesh/kong-mesh
- Find the external IP and port of the global-zone-sync service in the kong-mesh-system namespace:
kubectl get services -n kong-mesh-system
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-mesh-system global-zone-sync LoadBalancer 10.105.9.10 35.226.196.103 5685:30685/TCP 89s
kong-mesh-system kong-mesh-control-plane ClusterIP 10.105.12.133 <none> 5681/TCP,443/TCP,5676/TCP,5677/TCP,5678/TCP,5679/TCP,5682/TCP,5653/UDP 90s
In this example the value is 35.226.196.103:5685. You pass this as the value of <global-kds-address> when you set up the zone control planes.
Universal
When running the global control plane in Universal mode, a database must be used to persist state for production deployments. Ensure that migrations have been run against the database prior to running the global control plane.
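As a sketch, the migrations can be applied with kuma-cp migrate up, using the same store settings as the control plane (placeholders as in the run command below):
KUMA_STORE_TYPE=postgres \
KUMA_STORE_POSTGRES_HOST=<postgres-host> \
KUMA_STORE_POSTGRES_PORT=<postgres-port> \
KUMA_STORE_POSTGRES_USER=<postgres-user> \
KUMA_STORE_POSTGRES_PASSWORD=<postgres-password> \
KUMA_STORE_POSTGRES_DB_NAME=<postgres-db-name> \
kuma-cp migrate up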
- Set up the global control plane, setting the mode to global via the KUMA_MODE environment variable:
KUMA_MODE=global \
KUMA_ENVIRONMENT=universal \
KUMA_STORE_TYPE=postgres \
KUMA_STORE_POSTGRES_HOST=<postgres-host> \
KUMA_STORE_POSTGRES_PORT=<postgres-port> \
KUMA_STORE_POSTGRES_USER=<postgres-user> \
KUMA_STORE_POSTGRES_PASSWORD=<postgres-password> \
KUMA_STORE_POSTGRES_DB_NAME=<postgres-db-name> \
kuma-cp run
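To sanity-check that the global control plane is running, you can query its API server; 5681 is the default API port, as also seen in the Kubernetes service listing earlier:
curl http://<global-cp-address>:5681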
Set up the zone control planes
You need the following values to pass to each zone control plane setup:
- zone – the zone name. An arbitrary string. This value registers the zone control plane with the global control plane.
- kds-global-address – the external IP and port of the global control plane.
Create a separate cluster for each zone, and on each one run:
kumactl install control-plane \
--set "kuma.controlPlane.mode=zone" \
--set "kuma.controlPlane.zone=<zone-name>" \
--set "kuma.ingress.enabled=true" \
--set "kuma.controlPlane.kdsGlobalAddress=grpcs://<global-kds-address>:5685" \
--set "kuma.controlPlane.tls.kdsZoneClient.skipVerify=true" \
| kubectl apply -f -
Alternatively, install with Helm. Before using Kong Mesh with helm, please follow these steps to configure your local helm repo, then run:
helm install --create-namespace --namespace kong-mesh-system \
--set "kuma.controlPlane.mode=zone" \
--set "kuma.controlPlane.zone=<zone-name>" \
--set "kuma.ingress.enabled=true" \
--set "kuma.controlPlane.kdsGlobalAddress=grpcs://<global-kds-address>:5685" \
--set "kuma.controlPlane.tls.kdsZoneClient.skipVerify=true" \
kong-mesh kong-mesh/kong-mesh
where kuma.controlPlane.zone is the same value for all zone control planes in the same zone.
Add --set kuma.egress.enabled=true to the list of arguments if you want to deploy the optional zone egress.
Set --set kuma.controlPlane.tls.kdsZoneClient.skipVerify=true because the default global control plane certificate is self-signed. For production, use a certificate signed by a trusted CA. See the Secure access across services page for more information.
After installing a zone control plane, make sure to restart any application pods that are already running so that their data plane proxies can connect.
When running the zone control plane in Universal mode, a database must be used to persist state for production deployments.
Ensure that migrations have been run against the database prior to running the zone control plane.
- On each zone control plane, run:
KUMA_MODE=zone \
KUMA_MULTIZONE_ZONE_NAME=<zone-name> \
KUMA_ENVIRONMENT=universal \
KUMA_STORE_TYPE=postgres \
KUMA_STORE_POSTGRES_HOST=<postgres-host> \
KUMA_STORE_POSTGRES_PORT=<postgres-port> \
KUMA_STORE_POSTGRES_USER=<postgres-user> \
KUMA_STORE_POSTGRES_PASSWORD=<postgres-password> \
KUMA_STORE_POSTGRES_DB_NAME=<postgres-db-name> \
KUMA_MULTIZONE_ZONE_GLOBAL_ADDRESS=grpcs://<global-kds-address>:5685 \
KUMA_MULTIZONE_ZONE_KDS_TLS_SKIP_VERIFY=true \
kuma-cp run
where KUMA_MULTIZONE_ZONE_NAME is the same value for all zone control planes in the same zone.
KUMA_MULTIZONE_ZONE_KDS_TLS_SKIP_VERIFY is required because the default global control plane certificate is self-signed. It's recommended to use a certificate signed by a trusted CA in production. See the Secure access across services page for more information.
- Generate the zone proxy token:
To register the zone ingress and zone egress with the zone control plane, we first need to generate a token:
kumactl generate zone-token --zone=<zone-name> --scope egress --scope ingress > /tmp/zone-token
You can also generate the token with the REST API.
Alternatively, you could generate separate tokens for ingress and egress.
- Create an ingress data plane proxy configuration to allow kuma-cp services to be exposed for cross-zone communication:
echo "type: ZoneIngress
name: ingress-01
networking:
address: 127.0.0.1 # address that is routable within the zone
port: 10000
advertisedAddress: 10.0.0.1 # an address which other zones can use to consume this zone-ingress
advertisedPort: 10000 # a port which other zones can use to consume this zone-ingress" > ingress-dp.yaml
- Apply the ingress config, passing the IP address of the zone control plane to cp-address:
kuma-dp run \
--proxy-type=ingress \
--cp-address=https://<kuma-cp-address>:5678 \
--dataplane-token-file=/tmp/zone-token \
--dataplane-file=ingress-dp.yaml
If the zone ingress is running on a different machine than the zone control plane, you need to copy the CA cert file from the zone control plane (located at ~/.kuma/kuma-cp.crt) to somewhere accessible by the zone ingress (for example, /tmp/kuma-cp.crt). Modify the above command to provide the certificate path in the --ca-cert-file argument:
kuma-dp run \
--proxy-type=ingress \
--cp-address=https://<kuma-cp-address>:5678 \
--dataplane-token-file=/tmp/zone-token \
--ca-cert-file=/tmp/kuma-cp.crt \
--dataplane-file=ingress-dp.yaml
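To confirm the zone ingress has registered with the zone control plane, you can list zone ingresses with kumactl; this assumes kumactl is configured against the zone control plane, and the resource name below may vary by version:
kumactl get zone-ingresses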
- Optional: if you want to deploy zone egress, create a ZoneEgress data plane proxy configuration to allow kuma-cp services to be configured to proxy traffic to other zones or external services through the zone egress:
echo "type: ZoneEgress
name: zoneegress-01
networking:
address: 127.0.0.1 # address that is routable within the zone
port: 10002" > zoneegress-dataplane.yaml
- Apply the egress config, passing the IP address of the zone control plane to cp-address:
kuma-dp run \
--proxy-type=egress \
--cp-address=https://<kuma-cp-address>:5678 \
--dataplane-token-file=/tmp/zone-token \
--dataplane-file=zoneegress-dataplane.yaml
Verify control plane connectivity
If your global control plane runs on Kubernetes, you'll need to configure your kumactl like so:
# forward traffic from local pc into global control plane in the cluster
kubectl -n kong-mesh-system port-forward svc/kong-mesh-control-plane 5681:5681 &
# configure control plane for kumactl
kumactl config control-planes add \
--name global-control-plane \
--address http://localhost:5681 \
--skip-verify
You can run kumactl get zones, or check the list of zones in the web UI for the global control plane, to verify zone control plane connections. When a zone control plane connects to the global control plane, the Zone resource is created automatically in the global control plane. The Zone Ingress tab of the web UI also lists zone control planes that you deployed with zone ingress.
Ensure mTLS is enabled on the multi-zone meshes
mTLS is mandatory to enable cross-zone service communication.
mTLS can be configured in your mesh configuration as indicated in the mTLS section.
This is required because Kong Mesh uses the Server Name Indication field, part of the TLS protocol, as a way to pass routing information across zones.
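For example, a mesh with a builtin mTLS backend (named ca-1 here, which also matches the issuedBackends entry in the ServiceInsight output shown below) can be configured on Kubernetes as follows; on Universal, an equivalent Mesh resource can be applied with kumactl apply -f:
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin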
Cross-zone communication details
For this example we will assume we have a service running in a Kubernetes zone that exposes a kuma.io/service with the value echo-server_echo-example_svc_1010. The following examples are run from the remote zone, trying to access that service.
To view the list of service names available, run:
kubectl get serviceinsight all-services-default -oyaml
apiVersion: kuma.io/v1alpha1
kind: ServiceInsight
mesh: default
metadata:
  name: all-services-default
spec:
  services:
    echo-server_echo-example_svc_1010:
      dataplanes:
        online: 1
        total: 1
      issuedBackends:
        ca-1: 1
      status: online
The following are some examples of different ways to address echo-server in the echo-example Namespace in a multi-zone mesh.
To send a request in the same zone, you can rely on Kubernetes DNS and use the usual Kubernetes hostnames and ports:
curl http://echo-server:1010
Requests are distributed round-robin between zones.
You can use locality-aware load balancing to keep requests in the same zone.
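As a sketch, one way to enable it is the Mesh-level routing setting, shown here in Kubernetes form (keep any existing mtls configuration in the same resource when editing the mesh):
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  routing:
    localityAwareLoadBalancing: true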
To send a request to any zone, you can use the generated kuma.io/service and Kong Mesh DNS:
curl http://echo-server_echo-example_svc_1010.mesh:80
Kong Mesh DNS also supports RFC 1123 compatible names, where underscores are replaced with dots:
curl http://echo-server.echo-example.svc.1010.mesh:80
On Universal, the same list of services is available with kumactl inspect services:
SERVICE                              STATUS   DATAPLANES
echo-server_echo-example_svc_1010    Online   1/1
To consume the service in a Universal deployment without transparent proxy, add the following outbound to your Dataplane configuration:
outbound:
  - port: 20012
    tags:
      kuma.io/service: echo-server_echo-example_svc_1010
From the machine running the data plane proxy, you will now be able to reach the service at localhost:20012.
Alternatively, if you configure the transparent proxy you can just call echo-server_echo-example_svc_1010.mesh without defining an outbound section.
For security reasons it's not possible to customize the kuma.io/service in Kubernetes. If you want to have the same service running on both Universal and Kubernetes, make sure the Universal data plane's inbound has the same kuma.io/service as the one in Kubernetes (see the sketch below), or leverage TrafficRoute.
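For illustration, a Universal Dataplane whose inbound is aligned with the Kubernetes service above might look like this (the address and the application port are hypothetical):
type: Dataplane
mesh: default
name: echo-server-universal-01
networking:
  address: 192.168.0.2 # address of this instance (hypothetical)
  inbound:
    - port: 1010 # port exposed by the data plane proxy
      servicePort: 8080 # port the application listens on (hypothetical)
      tags:
        kuma.io/service: echo-server_echo-example_svc_1010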
Delete a zone
To delete a Zone we must first shut down the corresponding Kong Mesh zone control plane instances. As long as the zone control plane is running this will not be possible, and Kong Mesh returns a validation error like:
zone: unable to delete Zone, Zone CP is still connected, please shut it down first
When the zone control plane is fully disconnected and shut down, the Zone can be deleted. All corresponding resources (like Dataplane and DataplaneInsight) will be deleted automatically as well.
On Kubernetes:
kubectl delete zone zone-1
On Universal:
kumactl delete zone zone-1
Disable a zone
Change the enabled property value to false in the global control plane:
On Kubernetes:
apiVersion: kuma.io/v1alpha1
kind: Zone
metadata:
  name: zone-1
spec:
  enabled: false
On Universal:
type: Zone
name: zone-1
spec:
  enabled: false
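Either form can then be applied against the global control plane; the file name below is illustrative:
kubectl apply -f zone-1-disable.yaml # Kubernetes
kumactl apply -f zone-1-disable.yaml # Universal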
With this setting, the global control plane will stop exchanging configuration with this zone. As a result, the zone ingress for zone-1 will be deleted from the other zones and traffic won't be routed to it anymore. The zone will show as Offline in the GUI and CLI.