This is the documentation for the Kong Enterprise 2.1.x beta.
To see the documentation for the latest stable version of Kong Enterprise, go to 1.5.x.
Kong for Kubernetes Enterprise is a deployment of Kong Gateway (Enterprise) onto Kubernetes as an ingress controller. A Kubernetes ingress controller is a proxy that exposes Kubernetes services from applications (e.g., Deployments, StatefulSets) running on a Kubernetes cluster to client applications running outside of the cluster. The intent of an ingress controller is to provide a single point of control for all incoming traffic into the Kubernetes cluster.
For example, here’s a common use case: an application deployed to Kubernetes exposes an API that must be consumed by web or mobile client applications, or by services running in another cluster. A Kubernetes ingress controller secures and manages that traffic according to policies that can be changed on the fly to suit the use case and application.
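To make this concrete, here is a minimal sketch of an Ingress resource that routes external traffic through Kong to a backend Service. The Service name, path, and port are hypothetical placeholders, and the exact API version and ingress-class mechanism depend on your Kubernetes version:

```yaml
# Hypothetical example: route requests for /api to a Service named "my-api".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api-ingress
spec:
  ingressClassName: kong   # tells the cluster that Kong should handle this Ingress
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: my-api   # the Kubernetes Service to expose
            port:
              number: 80
```

Once applied, requests arriving at the Kong proxy for `/api` are forwarded to the `my-api` Service inside the cluster.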
Here are some benefits of using Kong for Kubernetes Enterprise:
- It stores all of the configuration in the Kubernetes datastore (etcd) using Custom Resource Definitions (CRDs), meaning you can use Kubernetes’ native tools to manage Kong and benefit from Kubernetes’ declarative configuration, RBAC, namespacing, and scalability.
- Because the configuration is stored in Kubernetes, no database needs to be deployed for Kong. Kong runs in DB-less mode, making it operationally easy to run, upgrade, and back up.
- It natively integrates with the Cloud Native Computing Foundation (CNCF) ecosystem to provide out-of-the-box monitoring, logging, certificate management, tracing, and scaling.
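As a sketch of the CRD-based configuration mentioned above, the following (hypothetical) resource declares a rate-limiting policy as a `KongPlugin` object; the metadata name and limit values are placeholders:

```yaml
# Hypothetical example: a Kong rate-limiting policy stored in Kubernetes as a CRD.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-example
plugin: rate-limiting
config:
  minute: 5      # allow at most 5 requests per minute
  policy: local  # count requests locally on each Kong node
```

Because the policy lives in the Kubernetes datastore like any other resource, it can be applied with `kubectl apply`, versioned in Git, and scoped with namespaces and RBAC alongside the rest of your cluster configuration.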
Alternatively, you can deploy Kong for Kubernetes Enterprise with a database to fully utilize features such as Kong Manager, Kong Developer Portal, and others. For a comparison of the options, see Deployment Options.
For more information about the architecture, see Kong Ingress Controller Design.