Kong Mesh 2.3.x
Deploy a multi-zone global control plane

Prerequisites

To set up a multi-zone deployment, you need to:

  • Set up the global control plane
  • Set up the zone control planes
  • Verify control plane connectivity
  • Ensure mTLS is enabled for the multi-zone meshes

Usage

Set up the global control plane

The global control plane must run on a dedicated cluster (unless using “Universal on Kubernetes” mode), and cannot be assigned to a zone.


The global control plane on Kubernetes must reside on its own Kubernetes cluster, to keep its resources separate from the resources the zone control planes create during synchronization.

Run:

Using kumactl:

kumactl install control-plane \
  --set "kuma.controlPlane.mode=global" \
  | kubectl apply -f -

Or using Helm (configure your local Helm repository first: https://docs.konghq.com/mesh/2.3.x/production/cp-deployment/kubernetes/#helm):

helm install \
  --create-namespace \
  --namespace kong-mesh-system \
  --set "kuma.controlPlane.mode=global" \
  kong-mesh kong-mesh/kong-mesh
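As an alternative to the --set flag, the same setting can live in a values file. A minimal hedged sketch (pass it with --values at install time):

```yaml
# values.yaml — equivalent to --set "kuma.controlPlane.mode=global"
kuma:
  controlPlane:
    mode: global
```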

Find the external IP and port of the kong-mesh-global-zone-sync service in the kong-mesh-system namespace:

kubectl get services -n kong-mesh-system
NAMESPACE     NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                  AGE
kong-mesh-system   kong-mesh-global-zone-sync     LoadBalancer   10.105.9.10     35.226.196.103   5685:30685/TCP                                                           89s
kong-mesh-system   kong-mesh-control-plane     ClusterIP      10.105.12.133   <none>           5681/TCP,443/TCP,5676/TCP,5677/TCP,5678/TCP,5679/TCP,5682/TCP,5653/UDP   90s

By default, it’s exposed on port 5685. In this example the value is 35.226.196.103:5685. You pass this as the value of <global-kds-address> when you set up the zone control planes.
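The address can also be extracted in a script. A hedged sketch that parses output shaped like the listing above (on a live cluster, kubectl's -o jsonpath output would be the more robust source):

```shell
# Parse EXTERNAL-IP and the first port out of the service listing above.
# The sample line is copied from the example output; on a real cluster you
# would pipe `kubectl get services -n kong-mesh-system` in instead.
svc_line='kong-mesh-system   kong-mesh-global-zone-sync   LoadBalancer   10.105.9.10   35.226.196.103   5685:30685/TCP   89s'
global_kds_address=$(printf '%s\n' "$svc_line" |
  awk '/global-zone-sync/ { split($6, p, ":"); print $5 ":" p[1] }')
echo "$global_kds_address"   # 35.226.196.103:5685
```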

Running the global control plane in “Universal on Kubernetes” mode means using PostgreSQL for storage instead of the Kubernetes API. This changes the failover, HA, and reliability characteristics; read the Kubernetes and PostgreSQL docs for details.

Before using Kong Mesh with Helm, follow these steps to configure your local Helm repo and review the reference values.yaml for the chart.

  1. Define a Kubernetes Secret containing the sensitive database information:

    apiVersion: v1
    kind: Secret
    metadata:
      name: your-secret-name
    type: Opaque
    data:
      POSTGRES_DB: ...
      POSTGRES_HOST_RW: ...
      POSTGRES_USER: ...
      POSTGRES_PASSWORD: ...
    
  2. Create a values.yaml file for the chart that sets kuma.controlPlane.environment=universal and kuma.controlPlane.mode=global.

  3. Set kuma.controlPlane.secrets to reference the sensitive database information:

    # ...
        secrets:
          postgresDb:
            Secret: your-secret-name
            Key: POSTGRES_DB
            Env: KUMA_STORE_POSTGRES_DB_NAME
          postgresHost:
            Secret: your-secret-name
            Key: POSTGRES_HOST_RW
            Env: KUMA_STORE_POSTGRES_HOST
          postgresUser:
            Secret: your-secret-name
            Key: POSTGRES_USER
            Env: KUMA_STORE_POSTGRES_USER
          postgresPassword:
            Secret: your-secret-name
            Key: POSTGRES_PASSWORD
            Env: KUMA_STORE_POSTGRES_PASSWORD
    
  4. Optionally set kuma.postgres with TLS settings

    # ...
      # Postgres settings for a Universal control plane on Kubernetes
      postgres:
        # -- Postgres port. The password should be provided as a secret reference in "controlPlane.secrets"
        # with the Env value "KUMA_STORE_POSTGRES_PASSWORD".
        # Example:
        # controlPlane:
        #   secrets:
        #     - Secret: postgres-postgresql
        #       Key: postgresql-password
        #       Env: KUMA_STORE_POSTGRES_PASSWORD
        port: "5432"
        # TLS settings
        tls:
          # -- Mode of TLS connection. Available values are: "disable", "verifyNone", "verifyCa", "verifyFull"
          mode: disable # ENV: KUMA_STORE_POSTGRES_TLS_MODE
          # -- Whether to disable SNI (the Postgres `sslsni` option).
          disableSSLSNI: false # ENV: KUMA_STORE_POSTGRES_TLS_DISABLE_SSLSNI
          # -- Secret name that contains the ca.crt
          caSecretName:
          # -- Secret name that contains the client tls.crt, tls.key
          secretName:
    
  5. Run helm install

     helm install kong-mesh \
       --create-namespace \
       --skip-crds \
       --namespace kong-mesh-system \
       --values values.yaml \
       kong-mesh/kong-mesh
    
  6. Find the external IP and port of the kong-mesh-global-zone-sync service in the kong-mesh-system namespace:

     kubectl get services -n kong-mesh-system
    
     NAMESPACE     NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                  AGE
     kong-mesh-system   kong-mesh-global-zone-sync     LoadBalancer   10.105.9.10     35.226.196.103   5685:30685/TCP                                                           89s
     kong-mesh-system   kong-mesh-control-plane     ClusterIP      10.105.12.133   <none>           5681/TCP,443/TCP,5676/TCP,5677/TCP,5678/TCP,5679/TCP,5682/TCP,5653/UDP   90s
    

    In this example the value is 35.226.196.103:5685. You pass this as the value of <global-kds-address> when you set up the zone control planes.
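Note that the data values in the Secret from step 1 must be base64-encoded. For example:

```shell
# Kubernetes Secret `data` fields are base64-encoded; encode each value first.
# "kong_mesh" is an illustrative database name, not from the original page.
encoded_db=$(printf '%s' 'kong_mesh' | base64)
echo "$encoded_db"   # a29uZ19tZXNo
```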

When running the global control plane in Universal mode, a database must be used to persist state for production deployments. Ensure that migrations have been run against the database prior to running the global control plane.

  1. Set up the global control plane by running kuma-cp with the KUMA_MODE=global environment variable:

    KUMA_MODE=global \
    KUMA_ENVIRONMENT=universal \
    KUMA_STORE_TYPE=postgres \
    KUMA_STORE_POSTGRES_HOST=<postgres-host> \
    KUMA_STORE_POSTGRES_PORT=<postgres-port> \
    KUMA_STORE_POSTGRES_USER=<postgres-user> \
    KUMA_STORE_POSTGRES_PASSWORD=<postgres-password> \
    KUMA_STORE_POSTGRES_DB_NAME=<postgres-db-name> \
    kuma-cp run
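The environment variables above can also be kept in an env file and exported before starting kuma-cp. A hedged sketch (the file name and the values written to it are illustrative):

```shell
# Write the store settings to a file, then export them for kuma-cp.
cat > kuma-cp-global.env <<'EOF'
KUMA_MODE=global
KUMA_ENVIRONMENT=universal
KUMA_STORE_TYPE=postgres
EOF
set -a                     # auto-export everything sourced below
. ./kuma-cp-global.env
set +a
echo "$KUMA_STORE_TYPE"    # postgres
# kuma-cp run              # uncomment on a real host
```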
    

Set up the zone control planes

You need the following values to pass to each zone control plane setup:

  • zone – the zone name. An arbitrary string. This value registers the zone control plane with the global control plane.
  • kds-global-address – the external IP and port of the global control plane.

On Kubernetes, create a separate cluster for each zone, and in each one run:

Using kumactl:

kumactl install control-plane \
  --set "kuma.controlPlane.mode=zone" \
  --set "kuma.controlPlane.zone=<zone-name>" \
  --set "kuma.ingress.enabled=true" \
  --set "kuma.controlPlane.kdsGlobalAddress=grpcs://<global-kds-address>:5685" \
  --set "kuma.controlPlane.tls.kdsZoneClient.skipVerify=true" \
  | kubectl apply -f -

Or using Helm (configure your local Helm repository first: https://docs.konghq.com/mesh/2.3.x/production/cp-deployment/kubernetes/#helm):

helm install \
  --create-namespace \
  --namespace kong-mesh-system \
  --set "kuma.controlPlane.mode=zone" \
  --set "kuma.controlPlane.zone=<zone-name>" \
  --set "kuma.ingress.enabled=true" \
  --set "kuma.controlPlane.kdsGlobalAddress=grpcs://<global-kds-address>:5685" \
  --set "kuma.controlPlane.tls.kdsZoneClient.skipVerify=true" \
  kong-mesh kong-mesh/kong-mesh

where kuma.controlPlane.zone is the same value for all zone control planes in the same zone.

Add --set kuma.egress.enabled=true to the list of arguments if you want to deploy the optional Zone Egress.

Set --set kuma.controlPlane.tls.kdsZoneClient.skipVerify=true because the default global control plane’s certificate is self-signed. In production, use a certificate signed by a trusted CA instead. See the Secure access across services page for more information.

After installing a zone control plane, restart any application pods that are already running so that their data plane proxies can connect to it.
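One way to script that restart is kubectl rollout restart per namespace. The sketch below only builds the command string so it can be reviewed first (the kuma-demo namespace is an assumption, not from this page):

```shell
# Compose (without executing) a rollout-restart for all Deployments in a
# namespace, so re-created pods get sidecars connected to the zone control plane.
restart_cmd() {
  printf 'kubectl rollout restart deployment --namespace %s' "$1"
}
restart_cmd kuma-demo   # review the printed command, then run it yourself
```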

When running the zone control plane in Universal mode, a database must be used to persist state for production deployments. Ensure that migrations have been run against the database prior to running the zone control plane.

  1. On each zone control plane, run:

     KUMA_MODE=zone \
     KUMA_MULTIZONE_ZONE_NAME=<zone-name> \
     KUMA_MULTIZONE_ZONE_KDS_TLS_SKIP_VERIFY=true \
     KUMA_ENVIRONMENT=universal \
     KUMA_STORE_TYPE=postgres \
     KUMA_STORE_POSTGRES_HOST=<postgres-host> \
     KUMA_STORE_POSTGRES_PORT=<postgres-port> \
     KUMA_STORE_POSTGRES_USER=<postgres-user> \
     KUMA_STORE_POSTGRES_PASSWORD=<postgres-password> \
     KUMA_STORE_POSTGRES_DB_NAME=<postgres-db-name> \
     KUMA_MULTIZONE_ZONE_GLOBAL_ADDRESS=grpcs://<global-kds-address>:5685 \
     kuma-cp run
    

    where KUMA_MULTIZONE_ZONE_NAME is the same value for all zone control planes in the same zone.

    KUMA_MULTIZONE_ZONE_KDS_TLS_SKIP_VERIFY=true is required because the default global control plane’s certificate is self-signed. In production, use a certificate signed by a trusted CA instead. See the Secure access across services page for more information.

  2. Generate the zone proxy token:

    To register the zone ingress and zone egress with the zone control plane, you first need to generate a token:

    kumactl generate zone-token --zone=<zone-name> --scope egress --scope ingress > /tmp/zone-token
    

    You can also generate the token with the REST API. Alternatively, you could generate separate tokens for ingress and egress.

  3. Create an ingress data plane proxy configuration so that the zone’s services can be exposed for cross-zone communication:

    echo "type: ZoneIngress
    name: ingress-01
    networking:
      address: 127.0.0.1 # address that is routable within the zone
      port: 10000
      advertisedAddress: 10.0.0.1 # an address which other zones can use to consume this zone-ingress
      advertisedPort: 10000 # a port which other zones can use to consume this zone-ingress" > ingress-dp.yaml
    
  4. Apply the ingress config, passing the IP address of the zone control plane to cp-address:

    kuma-dp run \
      --proxy-type=ingress \
      --cp-address=https://<kuma-cp-address>:5678 \
      --dataplane-token-file=/tmp/zone-token \
      --dataplane-file=ingress-dp.yaml
    

    If the zone ingress is running on a different machine than the zone control plane, you need to copy the CA cert file from the zone control plane (located at ~/.kuma/kuma-cp.crt) to a location accessible by the zone ingress (for example /tmp/kuma-cp.crt). Then modify the command above to pass the certificate path via the --ca-cert-file argument:

    kuma-dp run \
      --proxy-type=ingress \
      --cp-address=https://<kuma-cp-address>:5678 \
      --dataplane-token-file=/tmp/zone-token \
      --ca-cert-file=/tmp/kuma-cp.crt \
      --dataplane-file=ingress-dp.yaml
    
  5. Optional: if you want to deploy a zone egress

    Create a ZoneEgress data plane proxy configuration so that traffic to other zones or to external services can be proxied through the zone egress:

    echo "type: ZoneEgress
    name: zoneegress-01
    networking:
      address: 127.0.0.1 # address that is routable within the zone
      port: 10002" > zoneegress-dataplane.yaml
    
  6. Apply the egress config, passing the IP address of the zone control plane to cp-address:

     kuma-dp run \
       --proxy-type=egress \
       --cp-address=https://<kuma-cp-address>:5678 \
       --dataplane-token-file=/tmp/zone-token \
       --dataplane-file=zoneegress-dataplane.yaml
    

Verify control plane connectivity

If your global control plane runs on Kubernetes, you’ll need to configure your kumactl like so:

# forward traffic from the local machine to the global control plane in the cluster
kubectl -n kong-mesh-system port-forward svc/kong-mesh-control-plane 5681:5681 &

# configure control plane for kumactl
kumactl config control-planes add \
  --name global-control-plane \
  --address http://localhost:5681 \
  --skip-verify

You can run kumactl get zones, or check the list of zones in the web UI for the global control plane, to verify zone control plane connections.

When a zone control plane connects to the global control plane, the Zone resource is created automatically in the global control plane.

The Zone Ingress tab of the web UI also lists zone control planes that you deployed with zone ingress.

Ensure mTLS is enabled on the multi-zone meshes

mTLS is mandatory for cross-zone service communication. Configure mTLS in your mesh configuration as described in the mTLS section. This is required because Kong Mesh uses the TLS Server Name Indication (SNI) field to pass routing information across zones.

Cross-zone communication details

For this example, assume a service running in a Kubernetes zone exposes kuma.io/service with the value echo-server_echo-example_svc_1010. The following examples run in a remote zone and access that service.


On Kubernetes, to view the list of available service names, run:

kubectl get serviceinsight all-services-default -oyaml
apiVersion: kuma.io/v1alpha1
kind: ServiceInsight
mesh: default
metadata:
  name: all-services-default
spec:
  services:
    echo-server_echo-example_svc_1010:
      dataplanes:
        online: 1
        total: 1
      issuedBackends:
        ca-1: 1
      status: online

The following are some examples of different ways to address echo-server in the echo-example Namespace in a multi-zone mesh.

To send a request in the same zone, you can rely on Kubernetes DNS and use the usual Kubernetes hostnames and ports:

curl http://echo-server:1010

Requests are distributed round-robin between zones. You can use locality-aware load balancing to keep requests in the same zone.

To send a request to any zone, you can use the generated kuma.io/service and Kong Mesh DNS:

curl http://echo-server_echo-example_svc_1010.mesh:80

Kong Mesh DNS also supports RFC 1123 compatible names, where underscores are replaced with dots:

curl http://echo-server.echo-example.svc.1010.mesh:80
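The dots-for-underscores rule above is mechanical. A small sketch deriving the RFC 1123 hostname from the kuma.io/service value:

```shell
# Replace underscores with dots and append ".mesh" to get the RFC 1123 name.
svc='echo-server_echo-example_svc_1010'
mesh_host="$(printf '%s' "$svc" | tr '_' '.').mesh"
echo "$mesh_host"   # echo-server.echo-example.svc.1010.mesh
```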
On Universal, view the available services with:

kumactl inspect services
SERVICE                                  STATUS               DATAPLANES
echo-server_echo-example_svc_1010        Online               1/1

To consume the service in a Universal deployment without transparent proxy add the following outbound to your dataplane configuration:

outbound:
  - port: 20012
    tags:
      kuma.io/service: echo-server_echo-example_svc_1010

From that data plane, you can now reach the service at localhost:20012.
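For context, the outbound above sits inside a full Universal Dataplane resource. A minimal hedged sketch (the mesh, name, address, and inbound service are illustrative, not from this page):

```yaml
# Illustrative Universal Dataplane using the outbound from above.
type: Dataplane
mesh: default
name: web-01
networking:
  address: 192.168.0.2
  inbound:
    - port: 9000
      tags:
        kuma.io/service: web
  outbound:
    - port: 20012
      tags:
        kuma.io/service: echo-server_echo-example_svc_1010
```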

Alternatively, if you configure transparent proxy you can just call echo-server_echo-example_svc_1010.mesh without defining an outbound section.

For security reasons it’s not possible to customize the kuma.io/service in Kubernetes.

If you want to run the same service on both Universal and Kubernetes, make sure the Universal data plane’s inbound uses the same kuma.io/service as the one in Kubernetes, or leverage TrafficRoute.

Delete a zone

To delete a Zone, you must first shut down the corresponding Kong Mesh zone control plane instances. As long as the zone control plane is running, deletion is not possible, and Kong Mesh returns a validation error like:

zone: unable to delete Zone, Zone CP is still connected, please shut it down first

Once the zone control plane is fully disconnected and shut down, the Zone can be deleted. All corresponding resources (like Dataplane and DataplaneInsight) are deleted automatically as well.

On Kubernetes:

kubectl delete zone zone-1

On Universal:

kumactl delete zone zone-1

Disable a zone

Change the enabled property value to false in the global control plane:

On Kubernetes:

apiVersion: kuma.io/v1alpha1
kind: Zone
metadata:
  name: zone-1
spec:
  enabled: false

On Universal:

type: Zone
name: zone-1
spec:
  enabled: false

With this setting, the global control plane stops exchanging configuration with this zone. As a result, the zone ingress for zone-1 is deleted from the other zones and traffic is no longer routed to it. The zone shows as Offline in the GUI and CLI.
