Fallback Configuration

In this guide you’ll learn about the Fallback Configuration feature. We’ll explain its implementation details and provide an example scenario to demonstrate how it works in practice.

Prerequisites: Install Kong Ingress Controller with Gateway API support in your Kubernetes cluster and connect to Kong.

Prerequisites

Install the Gateway APIs

  1. Install the Gateway API CRDs before installing Kong Ingress Controller.

     kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml
    
  2. Create a Gateway and GatewayClass instance to use.

    echo "
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: GatewayClass
    metadata:
      name: kong
      annotations:
        konghq.com/gatewayclass-unmanaged: 'true'
    
    spec:
      controllerName: konghq.com/kic-gateway-controller
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: kong
    spec:
      gatewayClassName: kong
      listeners:
      - name: proxy
        port: 80
        protocol: HTTP
        allowedRoutes:
          namespaces:
             from: All
    " | kubectl apply -f -
    

    The results should look like this:

    gatewayclass.gateway.networking.k8s.io/kong created
    gateway.gateway.networking.k8s.io/kong created
    

Install Kong

You can install Kong in your Kubernetes cluster using Helm.

  1. Add the Kong Helm charts:

     helm repo add kong https://charts.konghq.com
     helm repo update
    
  2. Install Kong Ingress Controller and Kong Gateway with Helm:

     helm install kong kong/ingress -n kong --create-namespace 
    

Test connectivity to Kong

Kong Gateway’s proxy is exposed through a Kubernetes Service. Run the following commands to store the load balancer address in a variable named PROXY_IP:

  1. Populate $PROXY_IP for future commands:

     export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
     echo $PROXY_IP
    
  2. Ensure that you can call the proxy IP:

     curl -i $PROXY_IP
    

    The results should look like this:

     HTTP/1.1 404 Not Found
     Content-Type: application/json; charset=utf-8
     Connection: keep-alive
     Content-Length: 48
     X-Kong-Response-Latency: 0
     Server: kong/3.0.0
      
     {"message":"no Route matched with those values"}
    

Overview

Kong Ingress Controller 3.2.0 introduced the Fallback Configuration feature. It is designed to isolate issues related to individual parts of the configuration, allowing updates to the rest of it to proceed with no interruption. If you’re using Kong Ingress Controller in a multi-team environment, the fallback configuration mechanism can help you avoid lock-ups when one team’s configuration is broken.

Note: The Fallback Configuration is an opt-in feature. You must enable it by setting FallbackConfiguration=true in the controller’s feature gates configuration. See Feature Gates to learn how to do that.
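
For example, when running the controller outside of Helm, the feature gate can be enabled through the controller’s environment (a minimal sketch, assuming the standard CONTROLLER_-prefixed environment variable mapping for controller flags):

CONTROLLER_FEATURE_GATES=FallbackConfiguration=true
# Equivalent to passing --feature-gates=FallbackConfiguration=true to the controller.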

How it works

Kong Ingress Controller translates Kubernetes objects it gets from the Kubernetes API and pushes the translation result via Kong’s Admin API to Kong Gateway instances. However, issues can arise at various stages of this process:

  1. Admission Webhook: Validates individual Kubernetes objects against schemas and basic rules.
  2. Translation Process: Detects issues like cross-object validation errors.
  3. Kong Response: Kong rejects the configuration and returns an error associated with a specific object.
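
For instance, a schema-invalid object never reaches the later stages: the admission webhook rejects it at apply time. A hypothetical example (assuming the webhook is enabled; the exact error message may vary):

echo 'apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: broken-rate-limit
plugin: rate-limiting
config:
  secondz: 1' | kubectl apply -f -
# Expected: the apply is denied with a schema violation (unknown field "secondz"),
# so the object is never translated or pushed to Kong.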

Fallback Configuration is triggered when an issue is detected in the third stage, and it provides the following benefits:

  • Allows unaffected objects to be updated even when there are configuration errors.
  • Automatically builds a fallback configuration that Kong will accept without requiring user intervention by either:
    • Excluding the broken objects along with their dependants.
    • Backfilling the broken objects along with their dependants, using the in-memory cache of last valid Kubernetes objects (if the CONTROLLER_USE_LAST_VALID_CONFIG_FOR_FALLBACK environment variable is set to true).
  • Enables users to inspect and identify which objects were excluded from, or backfilled into, the configuration using diagnostic endpoints.

The table below summarizes the behavior of the Fallback Configuration feature based on these settings:

FallbackConfiguration feature gate | CONTROLLER_USE_LAST_VALID_CONFIG_FOR_FALLBACK | Behavior
---|---|---
false | false/true (has no effect) | The last valid configuration is used as a whole to recover (if stored).
true | false | The Fallback Configuration is triggered: broken objects and their dependants are excluded.
true | true | The Fallback Configuration is triggered: broken objects and their dependants are excluded and backfilled with their last valid version (if stored).

[Diagram: how the Fallback Configuration feature works in detail]

Example Scenario

In this example we’ll demonstrate how the Fallback Configuration works in practice.

Excluding broken objects

First, we’ll demonstrate the default behavior of the Fallback Configuration feature, which is to exclude broken objects and their dependants from the configuration.

To test the Fallback Configuration, make sure your Kong Ingress Controller instance is running with the Fallback Configuration feature and diagnostics server enabled.

helm upgrade --install kong kong/ingress -n kong \
  --set controller.ingressController.env.feature_gates=FallbackConfiguration=true \
  --set controller.ingressController.env.dump_config=true
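
You can confirm that the settings reached the controller by checking the rendered Deployment (a quick sanity check; kong-controller is the deployment name used elsewhere in this guide):

kubectl get deploy -n kong kong-controller -o yaml | grep -A1 CONTROLLER_FEATURE_GATES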

In the example, we’ll consider a situation where:

  1. We have two HTTPRoutes pointing to the same Service. One of the HTTPRoutes is configured with KongPlugins providing authentication and base rate limiting. Everything works as expected.
  2. We add one more rate-limiting KongPlugin to be associated with the second HTTPRoute and a specific KongConsumer, so that this consumer is rate-limited differently from the base rate limiting, but we forget to associate the KongConsumer with the KongPlugin. This leaves the HTTPRoute broken because of duplicated rate-limiting plugins.

Deploying valid configuration

First, let’s deploy the Service and its backing Deployment:

kubectl apply -f https://docs.konghq.com/assets/kubernetes-ingress-controller/examples/echo-service.yaml

The results should look like this:

service/echo created
deployment.apps/echo created

Next, let’s deploy the HTTPRoutes. route-b will reference three KongPlugins (key-auth, rate-limit-base, rate-limit-consumer):

echo 'apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: route-a
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /route-a
    backendRefs:
    - name: echo
      port: 1027
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: route-b
  annotations:
    konghq.com/plugins: key-auth, rate-limit-base, rate-limit-consumer
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /route-b
    backendRefs:
    - name: echo
      port: 1027
' | kubectl apply -f -

The results should look like this:

httproute.gateway.networking.k8s.io/route-a created
httproute.gateway.networking.k8s.io/route-b created

Let’s also create the KongPlugins:

echo 'apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: key-auth
plugin: key-auth
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-base
plugin: rate-limiting
config:
  second: 1
  policy: local
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-consumer
plugin: rate-limiting
config:
  second: 5
  policy: local' | kubectl apply -f -

The results should look like this:

kongplugin.configuration.konghq.com/key-auth created
kongplugin.configuration.konghq.com/rate-limit-base created
kongplugin.configuration.konghq.com/rate-limit-consumer created

Finally, let’s create the KongConsumer with its credentials and the rate-limit-consumer KongPlugin attached:

echo 'apiVersion: v1
kind: Secret
metadata:
  name: bob-key-auth
  labels:
    konghq.com/credential: key-auth
stringData:
  key: bob-password
---
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: bob
  annotations:
    konghq.com/plugins: rate-limit-consumer
    kubernetes.io/ingress.class: kong
username: bob
credentials:
- bob-key-auth
' | kubectl apply -f -
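
The results should look like this:

secret/bob-key-auth created
kongconsumer.configuration.konghq.com/bob created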

Verifying routes are functional

Let’s ensure that the HTTPRoutes are working as expected:

curl -i $PROXY_IP/route-a

The results should look like this:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 137
Connection: keep-alive
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 1
Via: kong/3.6.0
X-Kong-Request-Id: 5bf50016730eae43c359c17b41dc8614

Welcome, you are connected to node orbstack.
Running on Pod echo-74c66b778-szf8f.
In namespace default.
With IP address 192.168.194.13.

Authenticated requests with a valid apikey header on route-b should be accepted:

curl -i $PROXY_IP/route-b -H apikey:bob-password

The results should look like this:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 137
Connection: keep-alive
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 0
Via: kong/3.6.0
X-Kong-Request-Id: 14ae28589baff9459d5bb3476be6f570

Welcome, you are connected to node orbstack.
Running on Pod echo-74c66b778-szf8f.
In namespace default.
With IP address 192.168.194.13.

Meanwhile, requests without the apikey header should be rejected:

curl -i $PROXY_IP/route-b

The results should look like this:

HTTP/1.1 401 Unauthorized
Content-Type: application/json; charset=utf-8
Connection: keep-alive
WWW-Authenticate: Key realm="kong"
Content-Length: 96
X-Kong-Response-Latency: 0
Server: kong/3.6.0
X-Kong-Request-Id: 520c396c6c32b0400f7c33531b7f9b2c

{
  "message":"No API key found in request",
  "request_id":"520c396c6c32b0400f7c33531b7f9b2c"
}

Introducing a breaking change to the configuration

Now, let’s simulate a situation where we introduce a breaking change to the configuration. We’ll remove the rate-limit-consumer KongPlugin from the KongConsumer, so route-b will have two rate-limiting plugins associated with it, causing it to break.

 kubectl annotate kongconsumer bob konghq.com/plugins-

The results should look like this:

kongconsumer.configuration.konghq.com/bob annotated

Verifying the broken route was excluded

This causes route-b to break, as there are now two KongPlugins of the same type (rate-limiting) associated with it. We expect the route to be excluded from the configuration.

Let’s verify this:

curl -i $PROXY_IP/route-b

The results should look like this:

HTTP/1.1 404 Not Found
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 103
X-Kong-Response-Latency: 0
Server: kong/3.6.0
X-Kong-Request-Id: 209a6b14781179103528093188ed4008

{
  "message":"no Route matched with those values",
  "request_id":"209a6b14781179103528093188ed4008"
}

Inspecting diagnostic endpoints

The route is not configured because the Fallback Configuration mechanism has excluded the broken HTTPRoute.

We can verify this by inspecting the diagnostic endpoint:

kubectl port-forward -n kong deploy/kong-controller 10256 &
curl localhost:10256/debug/config/fallback | jq

The results should look like this:

{
  "status": "triggered",
  "brokenObjects": [
    {
      "group": "configuration.konghq.com",
      "kind": "KongPlugin",
      "namespace": "default",
      "name": "rate-limit-consumer",
      "id": "7167315d-58f5-4aea-8aa5-a9d989f33a49"
    }
  ],
  "excludedObjects": [
    {
      "group": "configuration.konghq.com",
      "kind": "KongPlugin",
      "version": "v1",
      "namespace": "default",
      "name": "rate-limit-consumer",
      "id": "7167315d-58f5-4aea-8aa5-a9d989f33a49",
      "causingObjects": [
        "configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
      ]
    },
    {
      "group": "gateway.networking.k8s.io",
      "kind": "HTTPRoute",
      "version": "v1",
      "namespace": "default",
      "name": "route-b",
      "id": "fc82aa3d-512c-42f2-b7c3-e6f0069fcc94",
      "causingObjects": [
        "configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
      ]
    }
  ]
}
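
Besides the diagnostic endpoint, Kong Ingress Controller also reflects failures in the affected route’s status. One way to inspect the conditions (a sketch; exact condition types and messages vary by version):

kubectl get httproute route-b -o jsonpath='{.status.parents[0].conditions}' | jq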

Verifying the working route is still operational and can be updated

We can also ensure the other HTTPRoute is still working:

curl -i $PROXY_IP/route-a

The results should look like this:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 137
Connection: keep-alive
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 1
Via: kong/3.6.0
X-Kong-Request-Id: 5bf50016730eae43c359c17b41dc8614

Welcome, you are connected to node orbstack.
Running on Pod echo-74c66b778-szf8f.
In namespace default.
With IP address 192.168.194.13.

What’s more, we’re still able to update the correct HTTPRoute without any issues. Let’s modify route-a’s path:

kubectl patch httproute route-a --type merge -p '{"spec":{"rules":[{"matches":[{"path":{"type":"PathPrefix","value":"/route-a-modified"}}],"backendRefs":[{"name":"echo","port":1027}]}]}}'

The results should look like this:

httproute.gateway.networking.k8s.io/route-a patched

Let’s verify the updated HTTPRoute:

curl -i $PROXY_IP/route-a-modified

The results should look like this:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 137
Connection: keep-alive
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 0
Via: kong/3.6.0
X-Kong-Request-Id: f26ce453eeeda50e3d53a26f44f0f21f

Welcome, you are connected to node orbstack.
Running on Pod echo-74c66b778-szf8f.
In namespace default.
With IP address 192.168.194.13.

The Fallback Configuration mechanism has successfully isolated the broken HTTPRoute and allowed the correct one to be updated.

Backfilling broken objects

Another mode of operation that the Fallback Configuration feature supports is backfilling broken objects with their last valid version. To demonstrate this, we’ll use the same setup as in the default mode, but this time we’ll set the CONTROLLER_USE_LAST_VALID_CONFIG_FOR_FALLBACK environment variable to true.

helm upgrade --install kong kong/ingress -n kong \
--set controller.ingressController.env.feature_gates=FallbackConfiguration=true \
--set controller.ingressController.env.use_last_valid_config_for_fallback=true \
--set controller.ingressController.env.dump_config=true

Attaching the plugin back

As this mode of operation leverages the cache of last valid Kubernetes objects, we need to begin with a fully valid configuration so that Kong Ingress Controller can store it.

Note: Kong Ingress Controller stores the cache of last valid Kubernetes objects in memory; it is not persisted across restarts. If broken objects were being backfilled with their last valid version, that version is lost after a restart, effectively excluding those objects from the configuration.
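
For example, restarting the controller drops this in-memory cache (a hypothetical check, using the kong-controller deployment name from earlier in this guide):

kubectl rollout restart deployment kong-controller -n kong
# After the restart, objects that were only kept alive by backfilling are
# excluded from the configuration until they are fixed.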

Let’s attach the rate-limit-consumer KongPlugin back to the KongConsumer so the configuration is entirely valid again:

kubectl annotate kongconsumer bob konghq.com/plugins=rate-limit-consumer

The results should look like this:

kongconsumer.configuration.konghq.com/bob annotated

Verifying both routes are operational again

Now, let’s verify that both HTTPRoutes are operational again.

curl -i $PROXY_IP/route-a-modified
curl -i $PROXY_IP/route-b

The results should look like this:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 137
Connection: keep-alive
X-Kong-Upstream-Latency: 2
X-Kong-Proxy-Latency: 0
Via: kong/3.6.0
X-Kong-Request-Id: 0d91bf2d355ede4d2b01c3306886c043

Welcome, you are connected to node orbstack.
Running on Pod echo-74c66b778-szf8f.
In namespace default.
With IP address 192.168.194.13.
HTTP/1.1 401 Unauthorized
Content-Type: application/json; charset=utf-8
Connection: keep-alive
WWW-Authenticate: Key realm="kong"
Content-Length: 96
X-Kong-Response-Latency: 0
Server: kong/3.6.0
X-Kong-Request-Id: 0bc94b381edeb52f5a41e23a260afe40

{
  "message":"No API key found in request",
  "request_id":"0bc94b381edeb52f5a41e23a260afe40"
}

Breaking the route again

As we’ve verified that both HTTPRoutes are operational, let’s break route-b again by removing the rate-limit-consumer KongPlugin from the KongConsumer:

kubectl annotate kongconsumer bob konghq.com/plugins-

The results should look like this:

kongconsumer.configuration.konghq.com/bob annotated

Verifying the broken route was backfilled

Backfilling the broken HTTPRoute with its last valid version should restore the route to its previous working state. That means we should be able to access route-b just as before the breaking change:

curl -i $PROXY_IP/route-b

The results should look like this:

HTTP/1.1 401 Unauthorized
Date: Mon, 10 Jun 2024 14:00:38 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
WWW-Authenticate: Key realm="kong"
Content-Length: 96
X-Kong-Response-Latency: 5
Server: kong/3.6.0
X-Kong-Request-Id: 4604f84de6ed0b1a9357e935da5cea2c

{
  "message":"No API key found in request",
  "request_id":"4604f84de6ed0b1a9357e935da5cea2c"
}

Inspecting diagnostic endpoints

Using the diagnostic endpoints, we can now inspect the objects that were excluded from and backfilled into the configuration:

kubectl port-forward -n kong deploy/kong-controller 10256 &
curl localhost:10256/debug/config/fallback | jq

The results should look like this:

{
  "status": "triggered",
  "brokenObjects": [
    {
      "group": "configuration.konghq.com",
      "kind": "KongPlugin",
      "namespace": "default",
      "name": "rate-limit-consumer",
      "id": "7167315d-58f5-4aea-8aa5-a9d989f33a49"
    }
  ],
  "excludedObjects": [
    {
      "group": "configuration.konghq.com",
      "kind": "KongPlugin",
      "version": "v1",
      "namespace": "default",
      "name": "rate-limit-consumer",
      "id": "7167315d-58f5-4aea-8aa5-a9d989f33a49",
      "causingObjects": [
        "configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
      ]
    },
    {
      "group": "gateway.networking.k8s.io",
      "kind": "HTTPRoute",
      "version": "v1",
      "namespace": "default",
      "name": "route-b",
      "id": "fc82aa3d-512c-42f2-b7c3-e6f0069fcc94",
      "causingObjects": [
        "configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
      ]
    }
  ],
  "backfilledObjects": [
    {
      "group": "configuration.konghq.com",
      "kind": "KongPlugin",
      "version": "v1",
      "namespace": "default",
      "name": "rate-limit-consumer",
      "id": "7167315d-58f5-4aea-8aa5-a9d989f33a49",
      "causingObjects": [
        "configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
      ]
    },
    {
      "group": "configuration.konghq.com",
      "kind": "KongConsumer",
      "version": "v1",
      "namespace": "default",
      "name": "bob",
      "id": "deecb7c5-a3f6-4b88-a875-0e1715baa7c3",
      "causingObjects": [
        "configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
      ]
    },
    {
      "group": "gateway.networking.k8s.io",
      "kind": "HTTPRoute",
      "version": "v1",
      "namespace": "default",
      "name": "route-b",
      "id": "fc82aa3d-512c-42f2-b7c3-e6f0069fcc94",
      "causingObjects": [
        "configuration.konghq.com/KongPlugin:default/rate-limit-consumer",
        "gateway.networking.k8s.io/HTTPRoute:default/route-b"
      ]
    }
  ]
}

Because rate-limit-consumer and route-b were reported as broken by Kong Gateway, they were excluded from the configuration. However, the Fallback Configuration mechanism backfilled them with their last valid version, restoring the route to its working state. You may notice that the KongConsumer was also backfilled: this is because the KongConsumer depended on the rate-limit-consumer plugin in the last valid state.

Note: The Fallback Configuration mechanism will attempt to backfill all broken objects along with their direct and indirect dependants. Dependencies are resolved based on the in-memory cache of last valid Kubernetes objects.

Modifying the affected objects

As we’re now relying on the last valid version of the broken objects and their dependants, we won’t be able to effectively modify them until we fix the underlying problem. Let’s try to add another key for the bob KongConsumer:

Create a new Secret with a new key:

echo 'apiVersion: v1
kind: Secret
metadata:
  name: bob-key-auth-new
  labels:
    konghq.com/credential: key-auth
stringData:
  key: bob-new-password' | kubectl apply -f -

The results should look like this:

secret/bob-key-auth-new created

Associate the new Secret with the KongConsumer:

kubectl patch kongconsumer bob --type merge -p '{"credentials":["bob-key-auth", "bob-key-auth-new"]}'

The results should look like this:

kongconsumer.configuration.konghq.com/bob patched

The change won’t take effect because the HTTPRoute and KongPlugin are still broken. We can verify this by trying to access route-b with the new key:

curl -i $PROXY_IP/route-b -H apikey:bob-new-password

The results should look like this:

HTTP/1.1 401 Unauthorized
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 81
X-Kong-Response-Latency: 2
Server: kong/3.6.0
X-Kong-Request-Id: 4c706c7e4e06140e56453b22e169df0a

{
  "message":"Unauthorized",
  "request_id":"4c706c7e4e06140e56453b22e169df0a"
}

Modifying the working route

On the other hand, we can still modify the working HTTPRoute:

kubectl patch httproute route-a --type merge -p '{"spec":{"rules":[{"matches":[{"path":{"type":"PathPrefix","value":"/route-a-modified-again"}}],"backendRefs":[{"name":"echo","port":1027}]}]}}'

The results should look like this:

httproute.gateway.networking.k8s.io/route-a patched

Let’s verify the updated HTTPRoute:

curl -i $PROXY_IP/route-a-modified-again

The results should look like this:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 136
Connection: keep-alive
X-Kong-Upstream-Latency: 2
X-Kong-Proxy-Latency: 0
Via: kong/3.6.0
X-Kong-Request-Id: 4369f15cf27cf16f5a2c82061b8d3950

Welcome, you are connected to node orbstack.
Running on Pod echo-bf9d56995-r8c86.
In namespace default.
With IP address 192.168.194.8.

Fixing the broken route

To fix the broken HTTPRoute, we need to associate the rate-limit-consumer KongPlugin back with the KongConsumer:

kubectl annotate kongconsumer bob konghq.com/plugins=rate-limit-consumer

This should unblock the credential changes we made earlier (the bob-key-auth-new Secret). Let’s verify this by accessing route-b with the new key:

curl -i $PROXY_IP/route-b -H apikey:bob-new-password

The results should look like this now:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 136
Connection: keep-alive
X-RateLimit-Limit-Second: 5
RateLimit-Limit: 5
RateLimit-Remaining: 4
RateLimit-Reset: 1
X-RateLimit-Remaining-Second: 4
X-Kong-Upstream-Latency: 2
X-Kong-Proxy-Latency: 2
Via: kong/3.6.0
X-Kong-Request-Id: 183ecc2973f16529a314ca5bf205eb73

Welcome, you are connected to node orbstack.
Running on Pod echo-bf9d56995-r8c86.
In namespace default.
With IP address 192.168.194.8.

Inspecting the Fallback Configuration process

Each time Kong Ingress Controller successfully applies a fallback configuration, it emits a Kubernetes Event with the FallbackKongConfigurationSucceeded reason. It also emits an Event with the FallbackKongConfigurationApplyFailed reason when the fallback configuration is rejected by Kong Gateway. You can monitor these Events to track the fallback configuration process.

You can check that the Event was emitted by running:

kubectl get events -A --field-selector='reason=FallbackKongConfigurationSucceeded'

The results should look like this:

NAMESPACE   LAST SEEN   TYPE     REASON                               OBJECT                                 MESSAGE
kong        4m26s       Normal   FallbackKongConfigurationSucceeded   pod/kong-controller-7f4fd47bb7-zdktb   successfully applied fallback Kong configuration to https://192.168.194.11:8444

Another way to monitor the Fallback Configuration mechanism is through Prometheus metrics. Refer to Prometheus Metrics for more information.
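
For a quick look, you can port-forward the controller’s metrics endpoint and filter for fallback-related series (a sketch, assuming the controller’s default metrics port 10255):

kubectl port-forward -n kong deploy/kong-controller 10255 &
curl -s localhost:10255/metrics | grep -i fallback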
