Configuration
This plugin is compatible with DB-less mode.
Compatible protocols
The Kafka Upstream plugin is compatible with the following protocols: grpc, grpcs, http, and https.
Parameters
Here's a list of all the parameters which can be used in this plugin's configuration:
- name or plugin (string, required): The name of the plugin, in this case kafka-upstream.
  - If using the Kong Admin API, Konnect API, declarative configuration, or decK files, the field is name.
  - If using the KongPlugin object in Kubernetes, the field is plugin.
- instance_name (string): An optional custom name to identify an instance of the plugin, for example kafka-upstream_my-service. The instance name shows up in Kong Manager and in Konnect, so it's useful when running the same plugin in multiple contexts, for example, on multiple services. You can also use it to access a specific plugin instance via the Kong Admin API.
  An instance name must be unique within the following context:
  - Within a workspace for Kong Gateway Enterprise
  - Within a control plane or control plane group for Konnect
  - Globally for Kong Gateway (OSS)
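To make the name/plugin and instance_name fields concrete, here is a minimal sketch of the same plugin expressed both ways. The broker address, topic, and instance name are placeholder assumptions, not values from this page.

```yaml
# Declarative configuration / decK file: the plugin is identified by the "name" field.
plugins:
  - name: kafka-upstream
    instance_name: kafka-upstream_my-service   # optional; must be unique in its context
    config:
      bootstrap_servers:
        - host: kafka.example.com               # placeholder broker
          port: 9092
      topic: kong-requests                      # placeholder topic
```

```yaml
# Kubernetes: the KongPlugin object uses the "plugin" field instead of "name".
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: kafka-upstream-example                  # placeholder object name
plugin: kafka-upstream
config:
  bootstrap_servers:
    - host: kafka.example.com
      port: 9092
  topic: kong-requests
```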
- service.name or service.id (string): The name or ID of the service the plugin targets. Set one of these parameters if adding the plugin to a service through the top-level /plugins endpoint. Not required if using /services/{serviceName|Id}/plugins.
- route.name or route.id (string): The name or ID of the route the plugin targets. Set one of these parameters if adding the plugin to a route through the top-level /plugins endpoint. Not required if using /routes/{routeName|Id}/plugins.
- consumer.name or consumer.id (string): The name or ID of the consumer the plugin targets. Set one of these parameters if adding the plugin to a consumer through the top-level /plugins endpoint. Not required if using /consumers/{consumerName|Id}/plugins.
- enabled (boolean, default: true): Whether this plugin will be applied.
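For example, to target a single service instead of applying the plugin globally, a declarative configuration can nest the plugin under that service. This is a sketch; the service name, upstream URL, broker, and topic are assumptions.

```yaml
services:
  - name: my-service                      # placeholder service
    url: http://upstream.example.com      # placeholder upstream
    plugins:
      - name: kafka-upstream              # scoped to my-service only
        config:
          bootstrap_servers:
            - host: kafka.example.com
              port: 9092
          topic: kong-requests
```

The equivalent Admin API call would target /services/{serviceName|Id}/plugins, in which case the service.name or service.id field is not required.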
- config (record, required). Example sketches combining the fields below appear after this parameter list.
  - bootstrap_servers (set of type record): Set of bootstrap brokers in a {host: host, port: port} list format.
    - host (string, required)
    - port (integer, required, between 0 and 65535)
  - topic (string, required): The Kafka topic to publish to.
  - timeout (integer, default: 10000): Socket timeout in milliseconds.
  - keepalive (integer, default: 60000): Keepalive timeout in milliseconds.
  - keepalive_enabled (boolean, default: false)
  - authentication (record, required)
    - strategy (string, must be one of: sasl): The authentication strategy for the plugin; the only supported value is sasl.
    - mechanism (string, must be one of: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512): The SASL authentication mechanism. Supported options: PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.
    - tokenauth (boolean): Enable this to indicate DelegationToken authentication.
    - user (string, referenceable, encrypted): Username for SASL authentication.
    - password (string, referenceable, encrypted): Password for SASL authentication.
  - security (record, required)
    - certificate_id (string): UUID of certificate entity for mTLS authentication.
    - ssl (boolean): Enables TLS.
  - forward_method (boolean, default: false): Include the request method in the message. At least one of these must be true: forward_method, forward_uri, forward_headers, forward_body.
  - forward_uri (boolean, default: false): Include the request URI and URI arguments (as in, query arguments) in the message. At least one of these must be true: forward_method, forward_uri, forward_headers, forward_body.
  - forward_headers (boolean, default: false): Include the request headers in the message. At least one of these must be true: forward_method, forward_uri, forward_headers, forward_body.
  - forward_body (boolean, default: true): Include the request body in the message. At least one of these must be true: forward_method, forward_uri, forward_headers, forward_body.
  - cluster_name (string): An identifier for the Kafka cluster. By default, this field generates a random string. You can also set your own custom cluster identifier.
    If more than one Kafka plugin is configured without a cluster_name (that is, if the default autogenerated value is removed), these plugins will use the same producer, and by extension, the same cluster. Logs will be sent to the leader of the cluster.
  - producer_request_acks (integer, default: 1, must be one of: -1, 0, 1): The number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments; 1 for only the leader; and -1 for the full ISR (In-Sync Replica set).
  - producer_request_timeout (integer, default: 2000): Time to wait for a Produce response in milliseconds.
  - producer_request_limits_messages_per_request (integer, default: 200): Maximum number of messages to include in a single Produce request.
  - producer_request_limits_bytes_per_request (integer, default: 1048576): Maximum size of a Produce request in bytes.
  - producer_request_retries_max_attempts (integer, default: 10): Maximum number of retry attempts per single Produce request.
  - producer_request_retries_backoff_timeout (integer, default: 100): Backoff interval between retry attempts in milliseconds.
  - producer_async (boolean, default: true): Flag to enable asynchronous mode.
  - producer_async_flush_timeout (integer, default: 1000): Maximum time interval in milliseconds between buffer flushes in asynchronous mode.
  - producer_async_buffering_limits_messages_in_memory (integer, default: 50000): Maximum number of messages that can be buffered in memory in asynchronous mode.
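Putting the connection-related fields together, here is a hedged sketch of a config block for a TLS-protected, SASL-authenticated cluster. The broker addresses, topic, mechanism choice, and vault references are placeholder assumptions.

```yaml
config:
  bootstrap_servers:                       # set of {host, port} records
    - host: kafka-1.example.com            # placeholder brokers
      port: 9093
    - host: kafka-2.example.com
      port: 9093
  topic: kong-requests                     # placeholder topic
  timeout: 10000                           # socket timeout in ms (default)
  keepalive: 60000                         # keepalive timeout in ms (default)
  keepalive_enabled: false
  security:
    ssl: true                              # enable TLS to the brokers
  authentication:
    strategy: sasl                         # the only supported strategy
    mechanism: SCRAM-SHA-256               # PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512
    user: "{vault://env/kafka-user}"       # referenceable field; assumed vault reference
    password: "{vault://env/kafka-password}"
```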
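The forward_* flags control which parts of the proxied request are included in the Kafka message, and at least one of them must remain true. A sketch that forwards the method, URI, and headers but not the body (other required fields omitted for brevity):

```yaml
config:
  forward_method: true
  forward_uri: true
  forward_headers: true
  forward_body: false    # allowed because the other forward_* flags are true
```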
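The producer_* settings tune acknowledgments, batching, retries, and asynchronous buffering. This sketch spells out the documented defaults and switches acknowledgments to the full ISR; whether these values suit a given workload is an assumption to verify.

```yaml
config:
  producer_request_acks: -1                              # -1 = full ISR; default is 1
  producer_request_timeout: 2000                         # ms to wait for a Produce response
  producer_request_limits_messages_per_request: 200      # max messages per Produce request
  producer_request_limits_bytes_per_request: 1048576     # max bytes per Produce request
  producer_request_retries_max_attempts: 10
  producer_request_retries_backoff_timeout: 100          # ms between retries
  producer_async: true                                   # buffer and flush asynchronously
  producer_async_flush_timeout: 1000                     # ms between buffer flushes
  producer_async_buffering_limits_messages_in_memory: 50000
```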