A Kong cluster allows you to scale the system horizontally by adding more
machines to handle more incoming requests. They will all share the same
configuration since they point to the same database. Kong nodes pointing to the
same datastore will be part of the same Kong cluster.
You need a load balancer in front of your Kong cluster to distribute traffic
across your available nodes.
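For illustration, a minimal round-robin setup in front of two Kong nodes might look like the following NGINX sketch (the upstream hostnames are placeholders for your own deployment, not anything Kong provides):

```nginx
# Hypothetical NGINX load balancer in front of two Kong nodes.
# Replace the upstream addresses with your actual node addresses.
upstream kong_cluster {
    server kong-node-1.internal:8000;
    server kong-node-2.internal:8000;
}

server {
    listen 80;
    location / {
        proxy_pass http://kong_cluster;
    }
}
```

Any load balancer (NGINX, HAProxy, a cloud load balancer) works; Kong itself imposes no requirement on how the traffic is distributed.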
What a Kong cluster does and doesn’t do
Having a Kong cluster does not mean that your clients' traffic will be
load-balanced across your Kong nodes out of the box. You still need a
load balancer in front of your Kong nodes to distribute your traffic. Instead,
a Kong cluster means that those nodes share the same configuration.
For performance reasons, Kong avoids database connections when proxying
requests, and caches the contents of your database in memory. The cached
entities include Services, Routes, Consumers, Plugins, Credentials, and so on. Since those
values are in memory, any change made via the Admin API of one of the nodes
needs to be propagated to the other nodes.
This document describes how those cached entities are invalidated, and how
to configure your Kong nodes for your use case, wherever it falls on the
performance versus consistency trade-off.
Single-node Kong clusters
A single Kong node connected to a database creates a
Kong cluster of one node. Any changes applied via the Admin API of this node
will instantly take effect. Example:
Consider a single Kong node A. If we delete a previously registered Service:

curl -X DELETE http://127.0.0.1:8001/services/test-service

Then any subsequent request to A would instantly return 404 Not Found, as
the node purged it from its local cache:

curl -i http://127.0.0.1:8000/test-service
Multiple-node Kong clusters
In a cluster of multiple Kong nodes, other nodes connected to the same database
would not instantly be notified that the Service was deleted by node A. While
the Service is not in the database anymore (it was deleted by node A), it is
still in node B's memory.

All nodes perform a periodic background job to synchronize with changes that
may have been triggered by other nodes. The frequency of this job can be
configured with the db_update_frequency property: every
db_update_frequency seconds, all running Kong nodes will poll the
database for any update, and will purge the relevant entities from their cache
if necessary.

If we delete a Service from node A, this change will not be effective in node
B until node B's next database poll, which will occur up to
db_update_frequency seconds later (though it could happen sooner).

This makes Kong clusters eventually consistent.
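The polling mechanism can be sketched with a toy model. This is illustrative Python only; the class names and the event log are assumptions, not Kong's actual implementation:

```python
# Toy model of Kong's polling-based cache invalidation (illustrative only).

class Database:
    """Shared datastore plus an append-only log of invalidation events."""
    def __init__(self):
        self.services = {"test-service": {"host": "example.com"}}
        self.events = []

    def delete_service(self, name):
        self.services.pop(name, None)
        self.events.append(("delete", name))

class Node:
    """A Kong node with an in-memory cache of database entities."""
    def __init__(self, db):
        self.db = db
        self.cache = dict(db.services)  # warm cache
        self.seen = 0                   # events already processed

    def admin_delete(self, name):
        # A change made via this node's Admin API takes effect locally at once.
        self.db.delete_service(name)
        self.cache.pop(name, None)
        self.seen = len(self.db.events)

    def poll(self):
        # Background job run every db_update_frequency seconds:
        # replay invalidation events produced by other nodes.
        for op, name in self.db.events[self.seen:]:
            if op == "delete":
                self.cache.pop(name, None)
        self.seen = len(self.db.events)

    def proxy(self, name):
        return 200 if name in self.cache else 404

db = Database()
node_a, node_b = Node(db), Node(db)

node_a.admin_delete("test-service")
print(node_a.proxy("test-service"))  # 404 -- A invalidated its cache instantly
print(node_b.proxy("test-service"))  # 200 -- B is stale until its next poll
node_b.poll()                        # happens within db_update_frequency seconds
print(node_b.proxy("test-service"))  # 404 -- the cluster is now consistent
```

The window between the delete and node_b's poll is exactly the period during which the cluster is inconsistent, bounded by db_update_frequency.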
Use read-only replicas when deploying Kong clusters with PostgreSQL
When using Postgres as the backend storage, you can optionally enable
Kong to serve read queries from a separate database instance.
Enabling the read-only connection support in Kong
greatly reduces the load on the main database instance since read-only
queries are no longer sent to it.
To learn more about how to configure this feature, refer to the
Configuration reference.
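As a sketch, the read-only connection is configured alongside the main one in kong.conf (the hostnames below are placeholders; verify the exact property names against your version's configuration reference):

```
# kong.conf -- hypothetical hosts; the pg_ro_* family of settings
# enables the separate read-only connection.
pg_host = primary.db.internal      # read-write (main) instance
pg_ro_host = replica.db.internal   # read-only replica serving read queries
```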
What is being cached?
All of the core entities such as Services, Routes, Plugins, Consumers, and
Credentials are cached in memory by Kong, and rely on invalidation via the
polling mechanism to be updated.
Additionally, Kong also caches database misses. This means that if you
configure a Service with no plugin, Kong will cache the fact that no plugin
is configured on it. Example:
On node A, we add a Service and a Route:

# node A
curl -X POST http://127.0.0.1:8001/services \
    --data "name=example-service" \
    --data "url=http://example.com"

curl -X POST http://127.0.0.1:8001/services/example-service/routes \
    --data "paths[]=/example"

(Note that we used /services/example-service/routes as a shortcut: we
could have used the /routes endpoint instead, but then we would need to pass
service_id as an argument, with the UUID of the new Service.)

A request to the Proxy port of both node A and node B will cache this Service, and
the fact that no plugin is configured on it:

# node A
curl http://127.0.0.1:8000/example

# node B
curl http://127.0.0.1:8000/example
Now, say we add a plugin to this Service via node A's Admin API:

# node A
curl -X POST http://127.0.0.1:8001/services/example-service/plugins \
    --data "name=key-auth"

Because this request was issued via node A's Admin API, node A will locally
invalidate its cache and, on subsequent requests, it will detect that this
Service has a plugin configured.

However, node B hasn't run a database poll yet, and still caches that this
Service has no plugin to run. It will be so until node B runs its next database
polling job.

Conclusion: all CRUD operations trigger cache invalidations. Creation
(POST, PUT) will invalidate cached database misses, and update/deletion
(PATCH, DELETE) will invalidate cached database hits.
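The interplay between cached misses and creation events can be sketched as follows. This is illustrative Python only; the names are assumptions, not Kong internals:

```python
# Toy sketch of negative caching (caching database misses) and why a
# creation must invalidate a cached miss. Illustrative only.

db_plugins = {}   # datastore: service name -> configured plugin names
cache = {}        # node-local cache; may hold None, i.e. a cached miss

def lookup_plugins(service):
    if service not in cache:
        # Cache whatever the database returns -- including "nothing here".
        cache[service] = db_plugins.get(service)
    return cache[service]

print(lookup_plugins("example-service"))  # None -- the miss is now cached

# A creation (POST/PUT) must invalidate the cached miss, otherwise the
# node would keep answering "no plugin" even though one now exists.
db_plugins["example-service"] = ["key-auth"]
cache.pop("example-service", None)  # invalidation event

print(lookup_plugins("example-service"))  # ['key-auth']
```

Without the invalidation step, the node would serve the stale "no plugin" answer indefinitely, which is exactly the failure mode the polling job guards against.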
How to configure database caching?
You can configure three properties in the Kong configuration file, the most
important one being
db_update_frequency, which determines where your Kong
nodes stand on the performance versus consistency trade-off.
Kong comes with default values tuned for consistency so that you can
experiment with its clustering capabilities while avoiding surprises. As you
prepare a production setup, you should consider tuning those values to ensure
that your performance constraints are respected.
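For reference, all three properties live in kong.conf. The values below are the defaults described in this document, shown only as a starting point for your own tuning:

```
# kong.conf (default values; tune for your workload)
db_update_frequency = 5      # seconds between database polls for invalidation events
db_update_propagation = 0    # extra purge delay, for eventually consistent databases
db_cache_ttl = 0             # 0 = entities stay cached until explicitly invalidated
```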
db_update_frequency (default: 5s)
This value determines the frequency at which your Kong nodes will poll
the database for invalidation events. A lower value means that the polling
job will execute more frequently, so your Kong nodes will keep up with
changes you apply sooner, at the cost of more database load. A higher value
means that your Kong nodes will spend less time running the polling jobs,
and will focus on proxying your traffic.

Note: Changes propagate through the cluster in up to
db_update_frequency seconds.

See the db_update_frequency entry in the configuration reference documentation.
db_update_propagation (default: 0s)
If your database itself is eventually consistent (e.g., Cassandra), you must
configure this value. Setting it ensures that the change has time to
propagate across your database nodes: when set, Kong nodes receiving
invalidation events from their polling jobs will delay the purging of their
cache for db_update_propagation seconds.
If a Kong node connected to an eventually consistent database was not delaying
the event handling, it could purge its cache, only to cache the non-updated
value again (because the change hasn’t propagated through the database yet)!
You should set this value to an estimate of the amount of time your database
cluster takes to propagate changes.
Note: When this value is set, changes propagate through the cluster in up to
db_update_frequency + db_update_propagation seconds.

See the db_update_propagation entry in the configuration reference documentation.
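For example, assuming a database cluster that takes about 2 seconds to propagate a write (an illustrative figure; measure your own cluster):

```
# kong.conf (illustrative values)
db_update_frequency = 5
db_update_propagation = 2

# Worst case before a change is applied cluster-wide:
# 5 + 2 = 7 seconds
```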
db_cache_ttl (default: 0s)
The time (in seconds) for which Kong will cache database entities (both hits
and misses). This Time-To-Live value acts as a safeguard in case a Kong node
misses an invalidation event, to avoid it from running on stale data for too
long. When the TTL is reached, the value will be purged from its cache, and the
next database result will be cached again.
By default, no data is invalidated based on this TTL (the default value is 0,
meaning no expiration).
This is usually fine: Kong nodes rely on invalidation events, which are handled
at the datastore level. If you are concerned that a Kong
node might miss an invalidation event for any reason, you should set a TTL;
otherwise the node might run with a stale value in its cache for an undefined
amount of time, until the cache is manually purged or the node is restarted.

See the db_cache_ttl entry in the configuration reference documentation.
When using Cassandra
If you use Cassandra as your Kong database, you must set
db_update_propagation to a non-zero value. Since
Cassandra is eventually consistent by nature, this will ensure that Kong nodes
do not prematurely invalidate their cache, only to fetch and cache an
out-of-date entity again. Kong will print a warning in its logs if you did
not configure this value when using Cassandra.
Additionally, you might want to configure
cassandra_consistency to a value such as QUORUM or
LOCAL_QUORUM, to ensure that the values being cached by your
Kong nodes are up-to-date values from your database.

Setting the cassandra_refresh_frequency option to 0 is not advised, as a Kong
restart would then be required to discover any changes to the Cassandra cluster topology.
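Putting the Cassandra-related advice together, a kong.conf fragment might look like this (the propagation delay is an illustrative figure, and you should verify each property name and default against your version's configuration reference):

```
# kong.conf (illustrative Cassandra-related values)
cassandra_consistency = LOCAL_QUORUM  # stronger read/write consistency level
db_update_propagation = 3             # give writes time to propagate across the ring
cassandra_refresh_frequency = 60      # keep non-zero so topology changes are discovered
```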
Interacting with the cache via the Admin API
If for some reason you want to investigate the cached values, or manually
invalidate a value cached by Kong (a cached hit or miss), you can do so via the
Admin API.
Note: Retrieving the
cache_key for each entity being cached by Kong is
currently an undocumented process. Future versions of the Admin API will make
this process easier.
Inspect a cached value
GET /cache/{cache_key}

Returns 200 OK and the value if a value with that key is cached, or
404 Not Found otherwise.

Purge a cached value
DELETE /cache/{cache_key}

Purge a node's cache
DELETE /cache

Note: Be wary of using this endpoint on a node running in production with a warm cache.
If the node is receiving a lot of traffic, purging its cache at the same time
will trigger many requests to your database, and could cause a sudden spike of
load on it.