MeshService

MeshService is a new resource that represents what was previously expressed by the Dataplane tag kuma.io/service. Kubernetes users can think of it as the analog of a Kubernetes Service.

A basic example follows to illustrate the structure:

Kubernetes:
apiVersion: kuma.io/v1alpha1
kind: MeshService
metadata:
  name: redis
  namespace: kuma-demo
  labels:
    team: db-operators
    kuma.io/mesh: default
spec:
  selector:
    dataplaneTags:
      app: redis
      k8s.kuma.io/namespace: kuma-demo
  ports:
  - port: 6739
    targetPort: 6739
    appProtocol: tcp
  - name: some-port
    port: 16739
    targetPort: target-port-from-container
    appProtocol: tcp
Universal:

type: MeshService
name: redis
mesh: default
labels:
  team: db-operators
spec:
  selector:
    dataplaneTags:
      app: redis
      k8s.kuma.io/namespace: kuma-demo
  ports:
  - port: 6739
    targetPort: 6739
    appProtocol: tcp
  - name: some-port
    port: 16739
    targetPort: target-port-from-container
    appProtocol: tcp
status:
  addresses:
  - hostname: redis.mesh
    origin: HostnameGenerator
    hostnameGeneratorRef:
      coreName: kmy-hostname-generator
  vips:
  - ip: 10.0.1.1
Terraform:

Please adjust konnect_mesh_control_plane.my_meshcontrolplane.id and konnect_mesh.my_mesh.name according to your current configuration.
resource "konnect_mesh_service" "redis" {
  provider = konnect-beta
  type = "MeshService"
  name = "redis"
  labels = {
    team           = "db-operators"
    "kuma.io/mesh" = konnect_mesh.my_mesh.name
  }
  spec = {
    selector = {
      dataplane_tags = {
        app = "redis"
        "k8s.kuma.io/namespace" = "kuma-demo"
      }
    }
    ports = [
      {
        port = "6739"
        target_port = "6739"
        app_protocol = "tcp"
      },
      {
        name = "some-port"
        port = "16739"
        target_port = "target-port-from-container"
        app_protocol = "tcp"
      }
    ]
  }
  status = {
    addresses = [
      {
        hostname = "redis.mesh"
        origin = "HostnameGenerator"
        hostname_generator_ref = {
          core_name = "kmy-hostname-generator"
        }
      }
    ]
    vips = [
      {
        ip = "10.0.1.1"
      }
    ]
  }
  cp_id    = konnect_mesh_control_plane.my_meshcontrolplane.id
  mesh     = konnect_mesh.my_mesh.name
}

A MeshService represents a destination for traffic from elsewhere in the mesh. It defines which Dataplane objects serve this traffic and which ports are available, and it holds information about the IPs and hostnames that can be used to reach this destination.

Zone types

How users interact with MeshServices will depend on the type of zone.

In both Kubernetes and universal zones, the resource is generated automatically.

Kubernetes

On Kubernetes, Service already provides many of the features offered by MeshService. For this reason, Kuma generates MeshServices from Services and:

  • reuses VIPs in the form of cluster IPs
  • uses Kubernetes DNS names

In the vast majority of cases, Kubernetes users do not create MeshServices.
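
For illustration, a typical Kubernetes Service is all that is needed; Kuma derives the MeshService from it automatically, reusing the Service's cluster IP as the VIP and its Kubernetes DNS name. A minimal sketch (the names and ports here mirror the example above and are illustrative only):

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: kuma-demo
spec:
  selector:
    app: redis
  ports:
  # The generated MeshService exposes a matching port
  - name: redis-port
    port: 6739
    targetPort: 6739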

Universal

In universal zones, MeshServices are generated based on the kuma.io/service value of the Dataplane inbounds. The name of the generated MeshService is derived from the value of the kuma.io/service tag, and it has one port corresponding to the given inbound. If the inbound doesn't have a name, one is generated from the port value.

The only restriction is that all inbounds sharing a kuma.io/service value must use the same port number. For example, an inbound:

      inbound:
      - name: main
        port: 80
        tags:
          kuma.io/service: test-server

would result in a MeshService:

type: MeshService
name: test-server
spec:
  ports:
  - port: 80
    targetPort: 80
    name: main
  selector:
    dataplaneTags:
      kuma.io/service: test-server

but you can't then have, on a different Dataplane, an inbound like:

      inbound:
      - name: main
        port: 8080
        tags:
          kuma.io/service: test-server

since there’s no way to create a coherent MeshService for test-server from these two inbounds.

Hostnames

Because of various shortcomings, the existing VirtualOutbound does not work with MeshService and is planned to be phased out. A new HostnameGenerator resource was introduced to manage hostnames for MeshServices.
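
For example, a HostnameGenerator along these lines assigns a hostname such as redis.mesh to every matching MeshService (a sketch only; the selector labels and template shown here are illustrative, not a prescribed configuration):

type: HostnameGenerator
name: basic-mesh-hostnames
spec:
  selector:
    meshService:
      matchLabels:
        k8s.kuma.io/namespace: kuma-demo
  template: "{{ .DisplayName }}.mesh"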

Ports

The ports field lists the ports exposed by the Dataplanes that the MeshService matches. targetPort can refer to a Dataplane port either directly by number or by its name.

  ports:
  - name: redis-non-tls
    port: 16739
    targetPort: 6739
    appProtocol: tcp
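
As a sketch of name-based references, assuming a Dataplane inbound named redis (the names here are illustrative), the MeshService port can point at that inbound by name instead of by number:

# Dataplane inbound (fragment)
inbound:
- name: redis
  port: 6739
  tags:
    app: redis

# MeshService ports (fragment) referencing the inbound by name
ports:
- name: redis-non-tls
  port: 16739
  targetPort: redis
  appProtocol: tcp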

Multizone

The main difference at the data plane level between kuma.io/service and MeshService is that traffic to a MeshService always goes to some particular zone. It may be the local zone or it may be a remote zone.

With kuma.io/service, this behavior depends on localityAwareLoadBalancing. If this is not enabled, traffic is load balanced equally between zones. If it is enabled, destinations in the local zone are prioritized.

So when moving to MeshService, you need to choose between:

  • keeping this behavior, which means moving to MeshMultiZoneService.
  • using MeshService instead, either from the local zone or one synced from a remote zone.

This is noted in the migration outline.
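
If you want to keep load balancing across zones, a MeshMultiZoneService groups the matching MeshServices from every zone behind one destination. A rough sketch, assuming the services share the kuma.io/display-name label shown below:

type: MeshMultiZoneService
name: test-server
mesh: default
spec:
  selector:
    meshService:
      matchLabels:
        kuma.io/display-name: test-server
  ports:
  - port: 80
    appProtocol: http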

Targeting

Policy targetRef

A MeshService resource can be used as the destination target of a policy by putting it in a to[].targetRef entry. For example:

spec:
  to:
  - targetRef:
      kind: MeshService
      name: test-server
      namespace: test-app
      sectionName: main

This targets the policy at requests to the given MeshService on the port named main. Only Kubernetes zones can reference resources by namespace, and doing so always selects resources in the local zone.

Route .backendRefs

To direct traffic to a given MeshService, it must be referenced in a backendRefs entry. In backendRefs, ports are optionally referred to by number:

spec:
  targetRef:
    kind: Mesh
  to:
    - targetRef:
        kind: MeshService
        name: test-server
        namespace: test-app
      rules:
        - matches:
            - path:
                type: PathPrefix
                value: /v2
          default:
            backendRefs:
              - kind: MeshService
                name: test-server-v2
                namespace: test-app
                port: 80

Unlike in targetRef, the port can be omitted in backendRefs.

Labels

To select MeshServices from other zones, or to select multiple MeshServices, you must use labels. Note that with backendRefs, only one resource may be selected.

If this field is set, resources are selected by their labels:

- kind: MeshService
  labels:
    kuma.io/display-name: test-server-v2
    k8s.kuma.io/namespace: test-app
    kuma.io/zone: east

In this case, the entry selects any resource with the display name test-server-v2 from the east zone in the test-app namespace. Only one resource will be selected.

But if we leave out the namespace, any resource named test-server-v2 in the east zone is selected, regardless of its namespace.

- kind: MeshService
  labels:
    kuma.io/display-name: test-server-v2
    kuma.io/zone: east

Migration

MeshService is opt-in and involves a migration process. Every Mesh must enable MeshServices in some form:

spec:
  meshServices:
    mode: Disabled # or Everywhere, ReachableBackends, Exclusive

The biggest change with MeshService is that traffic is no longer load-balanced between all zones. Traffic sent to a MeshService is only ever sent to a single zone.

The goal of migration is to stop using kuma.io/service entirely and instead use MeshService resources as destinations and as targetRef in policies and backendRef in routes.

After enabling MeshServices, the control plane generates additional resources. There are a few ways to manage this.

Options

Everywhere

This enables MeshService resource generation everywhere. Both kuma.io/service and MeshService are used to generate Envoy resources (Clusters and ClusterLoadAssignments), so having both enabled means roughly twice as many resources. This risks hitting the resource limits of the control plane and increasing memory usage in the data plane sooner than would otherwise be necessary. Therefore, consider trying ReachableBackends as described below.
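
On Kubernetes, enabling this mode on a Mesh might look like the following sketch, using the meshServices field shown above:

apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  meshServices:
    mode: Everywhere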

ReachableBackends

This enables automatic generation of Kuma MeshService resources but does not include the corresponding Envoy resources in the configuration of every data plane proxy. The intention is for users to explicitly and gradually introduce the relevant MeshServices via reachableBackends.
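
On Kubernetes, this is typically done per workload. A hedged sketch using the kuma.io/reachable-backends annotation on a Pod template (the service names are illustrative):

# Pod template metadata (fragment)
metadata:
  annotations:
    kuma.io/reachable-backends: |
      refs:
      - kind: MeshService
        name: redis
        namespace: kuma-demo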

Exclusive

This is the end goal of the migration. Destinations in the mesh are managed solely with MeshService resources and no longer via kuma.io/service tags and Dataplane inbounds. In the future this will become the default.

Steps

  1. Decide whether you want to set mode: Everywhere or enable MeshService consumer by consumer with mode: ReachableBackends.
  2. For every usage of a kuma.io/service, decide how it should be consumed:
    • as MeshService: only ever from one single zone
      • these are created automatically
    • as MeshMultiZoneService: combined with all “same” services in other zones
      • these have to be created manually
  3. Update your MeshHTTPRoutes/MeshTCPRoutes to refer to MeshService/MeshMultiZoneService directly.
    • this is required
  4. Set mode: Exclusive to stop receiving configuration based on kuma.io/service.
  5. Update targetRef.kind: MeshService references to use the real name of the MeshService as opposed to the kuma.io/service.
    • this is not strictly required