
Upgrading to 2.1.x and later versions

This guide walks through the steps needed to upgrade from earlier versions to 2.1.x and later, and reviews the changes introduced along the way so operators can evaluate whether they need to update their configuration. It also covers setting up a testing environment to trial the upgrade.

Prerequisites

  • Helm v3 is installed
  • You are familiar with Helm install and upgrade operations. See the documentation for Helm v3.

Note: Deploying and upgrading via the Helm chart is the supported mechanism for production deployments of KIC. If you’re deploying KIC using Kustomize or some other mechanism, you need to develop and test your own upgrade strategy based on the following examples.

Upgrading from 2.0.x to 2.1.x or later

This document covers both the requirements for upgrading from 1.x to 2.x and the requirements for upgrading from 2.0.x to later 2.x versions. If you have already upgraded to 2.0.x, only the steps in this subsection are necessary. If you are still on 1.x, follow the subsequent sections to upgrade to 2.0.x first. You must upgrade to 2.0.x before upgrading to 2.1.x or later.

Once you are on 2.0, you can upgrade to later 2.x versions directly as long as you apply the following CRD and webhook updates and remove the deprecated flag. You do not need to upgrade to 2.1 before upgrading to later 2.x versions.
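If you are not sure which controller version a cluster is currently running, you can check the image tags on the running Pods. A minimal sketch, assuming the standard kong/kubernetes-ingress-controller image and that kubectl points at the cluster in question:

# Print every container image in the cluster and keep only the controller's
$ kubectl get pods --all-namespaces \
    -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}' \
    | grep kubernetes-ingress-controller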

Update CRDs

The previous KongIngress CRD included several incorrectly named fields that did not apply correctly to Kong upstreams. The 2.1.x CRDs fix these fields:

  • healthchecks.passive.unhealthy.timeout is now healthchecks.passive.unhealthy.timeouts
  • healthchecks.active.unhealthy.timeout is now healthchecks.active.unhealthy.timeouts

Before upgrading, review your KongIngresses to see if you use either of the old fields. You will need to manually re-add them with the new field name after upgrading.

Kubernetes does not allow unknown fields in CRDs. Since the old fields don’t exist in 2.1, Kubernetes will strip the old field and its value after updating. There is no automated way to copy the old values into the new field.
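As a rough, hedged way to find affected resources, you can grep the existing KongIngress objects for a bare timeout: key (the leading space in the pattern excludes keys such as connect_timeout); the field names below come from the list above, while everything else is an assumption about your cluster:

# List KongIngress objects that still use the removed singular key
$ kubectl get kongingresses --all-namespaces -o yaml | grep -n ' timeout:'

# After applying the 2.1 CRDs, re-add the value under the new plural key, for example:
#   healthchecks:
#     passive:
#       unhealthy:
#         timeouts: 5   # was "timeout: 5" before 2.1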

Helm does not update CRDs automatically, and 2.1 includes changes to the controller CRDs. You must apply them manually before upgrading:

kubectl apply -f https://raw.githubusercontent.com/Kong/charts/main/charts/kong/crds/custom-resource-definitions.yaml
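To confirm the updated definitions were applied, you can list the Kong CRDs afterward; a quick check:

$ kubectl get crds | grep configuration.konghq.com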

Updating the admission webhook

2.1 includes changes to the admission webhook to account for changes in the webhook code and to avoid unwanted interaction with Helm Secrets. If you are using Helm, the webhook will update automatically. If not, you should remove your current webhook and follow the admission webhook guide to create a new one.
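For non-Helm installs, a minimal sketch of the removal step; the configuration name shown is only an example, use whatever name kubectl reports for your cluster:

# Find the existing webhook configuration, then delete it by name and
# recreate it by following the admission webhook guide
$ kubectl get validatingwebhookconfigurations
$ kubectl delete validatingwebhookconfiguration kong-validations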

Deprecated leader election flag

2.1 deprecates the --leader-elect flag (and CONTROLLER_LEADER_ELECT environment variable). Leader election is now set automatically based on the database mode (election is enabled for Postgres-backed instances and disabled for DB-less instances). The flag is still accepted but no longer has any effect, and will be removed in a future release.
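To check whether your deployment still sets the deprecated flag or environment variable, a hedged sketch (harmless if left in place, but worth removing from your values.yaml or manifests):

$ kubectl get deployments --all-namespaces -o yaml | grep -iE 'leader-elect|LEADER_ELECT'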

1.x to 2.x Breaking changes

Mechanically, the Helm upgrade is backward compatible, but the KIC 2.0.x release includes some breaking changes to options and controller operations:

  • Several controller manager flags were removed or changed
  • The format of controller manager logs has changed, and logs are now produced by multiple controllers instead of one
  • The admission webhook now requires clients that support TLS 1.2 or later

See the KIC Changelog for all changes in this release.

Flag Changes

If you don't have a heavily customized KIC deployment (for example, if you use the standard values.yaml options and flags for your Helm deployment), the following flag changes likely have no impact on you.

However, if you previously set custom arguments for the controller with options like ingressController.args, pay careful attention to the following sections and make adjustments to your config.
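One way to review the arguments currently passed to the controller before making changes is to read them straight off the Deployment; this sketch assumes the container is named ingress-controller, as in the standard chart:

$ kubectl get deployments --all-namespaces \
    -o jsonpath='{range .items[*]}{.spec.template.spec.containers[?(@.name=="ingress-controller")].args}{"\n"}{end}'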

Removed flags

The following general purpose flags have been removed from the controller manager:

  • --version
  • --alsologtostderr
  • --logtostderr
  • --v
  • --vmodule

Support for deprecated classless ingress types has been removed:

  • --process-classless-ingress-v1beta1
  • --process-classless-ingress-v1
  • --process-classless-kong-consumer

Changed flags

The following Ingress controller toggles have been replaced:

  • --disable-ingress-extensionsv1beta1 has been replaced by --enable-controller-ingress-extensionsv1beta1=false
  • --disable-ingress-networkingv1 has been replaced by --enable-controller-ingress-networkingv1=false
  • --disable-ingress-networkingv1beta1 has been replaced by --enable-controller-ingress-networkingv1beta1=false

If you’re affected by these flag changes, review the Independent Controller Toggling section for more context.

The --dump-config flag is now a boolean:

  • true replaces the old enabled value
  • false replaces the old disabled value
  • true with the additional new --dump-sensitive-config=true flag replaces the old sensitive value
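If you set these flags through ingressController.args in your Helm values, the substitutions can be made in a small override file and applied with helm upgrade. A hypothetical sketch; the flag replacements follow the lists above, while the file name and variables are placeholders:

$ cat > kic-args-override.yaml <<'EOF'
ingressController:
  args:
    # was --disable-ingress-networkingv1beta1
    - --enable-controller-ingress-networkingv1beta1=false
    # was --dump-config=sensitive
    - --dump-config=true
    - --dump-sensitive-config=true
EOF
# Note: this args list replaces any args set in earlier values files, so
# include the full set of custom flags you need.
$ helm upgrade ${YOUR_RELEASE_NAME} kong/kong \
  --namespace ${YOUR_NAMESPACE} \
  -f ${PATH_TO_YOUR_VALUES_FILE} \
  -f kic-args-override.yaml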

Logging Differences

In versions of the KIC prior to v2.0.0, logging output included a large startup header, and the majority of logs were produced by a single logging entity. For example:

-------------------------------------------------------------------------------
Kong Ingress controller
-------------------------------------------------------------------------------

W0825 14:48:18.084560       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2021-08-25T14:48:18Z" level=info msg="version of kubernetes api-server: 1.21" api-server-host="https://10.96.0.1:443" git_commit=5e58841cce77d4bc13713ad2b91fa0d961e69192 git_tree_state=clean git_version=v1.21.1 major=1 minor=21 platform=linux/amd64
time="2021-08-25T14:48:19Z" level=info msg="kong version: 2.5.0" kong_version=2.5.0
time="2021-08-25T14:48:19Z" level=info msg="datastore strategy for kong: off"
time="2021-08-25T14:48:19Z" level=info msg="chosen Ingress API version: networking.k8s.io/v1"
time="2021-08-25T14:48:55Z" level=info msg="started leading" component=status-syncer
time="2021-08-25T14:48:55Z" level=info msg="syncing configuration" component=controller
time="2021-08-25T14:48:55Z" level=info msg="no configuration change, skipping sync to kong" component=controller

In this previous architecture, a single controller was responsible for all supported resources (for example, Ingress or TCPIngress) and produced all of their logs.

In addition to increasing logging output to help identify problems and conditions during the controller manager runtime, v2.0.0 introduced individual controllers for each supported API type. There is now logging metadata specific to these components. For example:

time="2021-08-25T15:01:37Z" level=info msg="Starting EventSource" logger=controller-runtime.manager.controller.ingress reconciler group=networking.k8s.io reconciler kind=Ingress
time="2021-08-25T15:01:53Z" level=info msg="updating the proxy with new Ingress" NetV1Ingress="{\"Namespace\":\"default\",\"Name\":\"httpbin-ingress-v1beta1\"}" logger=controllers.Ingress.netv1 name=httpbin-ingress-v1beta1 namespace=default
time="2021-08-25T15:01:54Z" level=info msg="successfully synced configuration to kong." subsystem=proxy-cache-resolver

In these example log entries, note the logger=controllers.Ingress.netv1 field. It identifies the component that produced the entry, so operators can filter for a specific controller when reviewing logs.
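For example, you can filter the controller logs for a single component. A sketch that assumes the standard chart layout, where the controller runs in a container named ingress-controller inside a Deployment named <release>-kong; adjust the names to your installation:

$ kubectl -n ${YOUR_RELEASE_NAMESPACE} logs deployment/${YOUR_RELEASE_NAME}-kong \
    -c ingress-controller | grep 'logger=controllers.Ingress.netv1'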

If you and your team depend on logging output as a significant component of KIC administration in your organization, we recommend deploying v2.0.x in a non-production environment before upgrading. Set up a testing environment and familiarize yourself with the new logging characteristics.

Independent Controller Toggling

In v2.0.x, KIC separates its single monolithic controller into several independent ones focused on specific APIs.

With the v2.0.x release, these independent controllers can be individually enabled or disabled with the new --enable-controller-{NAME} flags provided for the controller manager.

Auto-negotiation of the Ingress API version (for example, extensions/v1beta1 or networking/v1) has been disabled, and you now have to explicitly enable exactly one of these controllers:

  • --enable-controller-ingress-extensionsv1beta1
  • --enable-controller-ingress-networkingv1
  • --enable-controller-ingress-networkingv1beta1

In most cases, and with versions of Kubernetes greater than v1.19.x, you can use the default value of networking/v1.

See the CLI Arguments Reference for a full list of these new options and their default values.

Testing environment

To avoid issues with the upgrade, run it in a test environment before deploying it to production. Create a Kubernetes cluster using the same tools that deployed your production cluster, or use a local development cluster such as minikube or kind.
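If you don't have a spare cluster available, a throwaway local one is enough for this purpose; a minimal sketch using kind (assumes kind is installed; the cluster name is arbitrary):

$ kind create cluster --name kic-upgrade-test
$ kubectl cluster-info --context kind-kic-upgrade-test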

Using Helm, check the deployed chart version:

$ helm list -A
NAME               NAMESPACE   STATUS   CHART      APP VERSION
ingress-controller kong-system deployed kong-2.3.0 2.5

In the above example, kong-2.3.0 is the currently deployed chart version.

Using the existing chart version and the values.yaml configuration for your production environment, deploy a copy to your test cluster with the --version flag:

$ helm install kong-upgrade-testing kong/kong \
  --version ${YOUR_VERSION} \
  -f ${PATH_TO_YOUR_VALUES_FILE}

Note: You may need to adjust your chart further to work in a development or staging environment. See the Helm chart documentation.

Use this testing environment to walk through the following upgrade steps and ensure there are no problems during the upgrade process. Once you're satisfied everything is ready, switch to the production cluster and work through the upgrade steps again.

Upgrade

Configure Helm repository

Check the local helm installation to make sure it has the Kong Charts Repository loaded:

$ helm repo list
NAME    URL
kong    https://charts.konghq.com

If the repository is not present, add it:

$ helm repo add kong https://charts.konghq.com

Update the repository to pull the latest charts:

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kong" chart repository
Update Complete. ⎈Happy Helming!⎈

Perform the upgrade

Run the following command, specifying the old release name, the namespace where you’ve configured Kong Gateway, and the existing values.yaml configuration file:

$ helm upgrade ${YOUR_RELEASE_NAME} kong/kong \
  --namespace ${YOUR_NAMESPACE} \
  -f ${PATH_TO_YOUR_VALUES_FILE}

After the upgrade completes, there is a brief period before the new resources come online. You can wait for the relevant Pods to become ready by watching them in your release namespace:

$ kubectl -n ${YOUR_RELEASE_NAMESPACE} get pods -w

Once the new pods are in a Ready state, the upgrade is complete.
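Alternatively, you can block until the rollout finishes; a sketch that assumes the chart's default Deployment name of <release>-kong:

$ kubectl -n ${YOUR_RELEASE_NAMESPACE} rollout status deployment/${YOUR_RELEASE_NAME}-kong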

Rollback

If you run into problems during or after the upgrade, Helm provides a rollback mechanism to revert to a previous revision of the release:

$ helm rollback --namespace ${YOUR_RELEASE_NAMESPACE} ${YOUR_RELEASE_NAME}
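If you need to return to a specific revision rather than the one immediately before, list the release history first and pass the revision number; a short sketch using standard Helm commands:

$ helm history --namespace ${YOUR_RELEASE_NAMESPACE} ${YOUR_RELEASE_NAME}
$ helm rollback --namespace ${YOUR_RELEASE_NAMESPACE} ${YOUR_RELEASE_NAME} ${REVISION}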

You can wait for the rollback to complete by watching the relevant Pod resources:

$ kubectl -n ${YOUR_RELEASE_NAMESPACE} get pods -w

If you rolled back because of issues in production, consider using a testing environment to identify and correct them before attempting the upgrade again, or refer to the troubleshooting documentation.
