This plugin transforms requests into Kafka messages in an Apache Kafka topic. For more information, see Kafka topics.
Kong also provides a Kafka Log plugin for publishing logs to a Kafka topic. See Kafka Log.
Configuration Reference
This plugin is compatible with DB-less mode.
In DB-less mode, you configure Kong Gateway declaratively. Therefore, the Admin API is mostly read-only. The only tasks it can perform are all related to handling the declarative config, including:
- Setting a target's health status in the load balancer
- Validating configurations against schemas
- Uploading the declarative configuration using the `/config` endpoint
Example plugin configuration
A plugin that is not associated with any service, route, or consumer is considered global, and will be run on every request. Read the Plugin Reference and the Plugin Precedence sections for more information.
The following examples provide some typical configurations for enabling the `kafka-upstream` plugin globally.
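In DB-less mode, the same global configuration can be expressed declaratively instead of through the Admin API. The snippet below is a minimal sketch of a `kong.yml` fragment; the broker address and topic name are illustrative, and the `_format_version` value depends on your Kong Gateway release:

```yaml
_format_version: "3.0"   # adjust to match your Kong Gateway version

plugins:
  # No service/route/consumer association, so this plugin runs globally.
  - name: kafka-upstream
    config:
      bootstrap_servers:
        - host: localhost   # illustrative broker address
          port: 9092
      topic: kong-upstream  # illustrative topic name
```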
Parameters
Here's a list of all the parameters which can be used in this plugin's configuration:
| Form Parameter | Description |
| --- | --- |
| `name` *required* Type: string | The name of the plugin, in this case `kafka-upstream`. |
| `enabled` *required* Type: boolean Default: `true` | Whether this plugin will be applied. |
| `config.bootstrap_servers` *required* Type: set of record elements | Set of bootstrap brokers in a `{host: host, port: port}` list format. |
| `config.topic` *required* Type: string | The Kafka topic to publish to. |
| `config.authentication.strategy` *optional* Type: string | The authentication strategy for the plugin; the only supported value is `sasl`. |
| `config.authentication.mechanism` *optional* Type: string | The SASL authentication mechanism; the two supported values are `PLAIN` and `SCRAM-SHA-256`. |
| `config.authentication.user` *optional* Type: string | Username for SASL authentication. |
| `config.authentication.password` *optional* Type: string | Password for SASL authentication. |
| `config.authentication.tokenauth` *optional* Type: boolean Default: `false` | Enable this to indicate that a delegation token is used for authentication. |
| `config.security.ssl` *optional* Type: boolean Default: `false` | Enables TLS. |
| `config.security.certificate_id` *optional* Type: string | UUID of the certificate entity for mTLS authentication. |
| `config.timeout` *optional* Type: integer Default: `10000` | Socket timeout in milliseconds. |
| `config.keepalive` *optional* Type: integer Default: `60000` | Keepalive timeout in milliseconds. |
| `config.forward_method` *semi-optional* Type: boolean Default: `false` | Include the request method in the message. At least one of the `config.forward_*` options must be `true`. |
| `config.forward_uri` *semi-optional* Type: boolean Default: `false` | Include the request URI and URI arguments (query arguments) in the message. At least one of the `config.forward_*` options must be `true`. |
| `config.forward_headers` *semi-optional* Type: boolean Default: `false` | Include the request headers in the message. At least one of the `config.forward_*` options must be `true`. |
| `config.forward_body` *semi-optional* Type: boolean Default: `true` | Include the request body in the message. At least one of the `config.forward_*` options must be `true`. |
| `config.producer_request_acks` *optional* Type: integer Default: `1` | The number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: `0` for no acknowledgments, `1` for only the leader, and `-1` for the full ISR (In-Sync Replica set). |
| `config.producer_request_timeout` *optional* Type: integer Default: `2000` | Time to wait for a Produce response in milliseconds. |
| `config.producer_request_limits_messages_per_request` *optional* Type: integer Default: `200` | Maximum number of messages to include in a single Produce request. |
| `config.producer_request_limits_bytes_per_request` *optional* Type: integer Default: `1048576` | Maximum size of a Produce request in bytes. |
| `config.producer_request_retries_max_attempts` *optional* Type: integer Default: `10` | Maximum number of retry attempts per single Produce request. |
| `config.producer_request_retries_backoff_timeout` *optional* Type: integer Default: `100` | Backoff interval between retry attempts in milliseconds. |
| `config.producer_async` *optional* Type: boolean Default: `true` | Flag to enable asynchronous mode. |
| `config.producer_async_flush_timeout` *optional* Type: integer Default: `1000` | Maximum time interval in milliseconds between buffer flushes in asynchronous mode. |
| `config.producer_async_buffering_limits_messages_in_memory` *optional* Type: integer Default: `50000` | Maximum number of messages that can be buffered in memory in asynchronous mode. |
Enable on a service-less route
```shell
curl -X POST http://kong:8001/routes/my-route/plugins \
    --data "name=kafka-upstream" \
    --data "config.bootstrap_servers[1].host=localhost" \
    --data "config.bootstrap_servers[1].port=9092" \
    --data "config.topic=kong-upstream" \
    --data "config.timeout=10000" \
    --data "config.keepalive=60000" \
    --data "config.forward_method=false" \
    --data "config.forward_uri=false" \
    --data "config.forward_headers=false" \
    --data "config.forward_body=true" \
    --data "config.producer_request_acks=1" \
    --data "config.producer_request_timeout=2000" \
    --data "config.producer_request_limits_messages_per_request=200" \
    --data "config.producer_request_limits_bytes_per_request=1048576" \
    --data "config.producer_request_retries_max_attempts=10" \
    --data "config.producer_request_retries_backoff_timeout=100" \
    --data "config.producer_async=true" \
    --data "config.producer_async_flush_timeout=1000" \
    --data "config.producer_async_buffering_limits_messages_in_memory=50000"
```
Implementation details
This plugin uses the lua-resty-kafka client.
When encoding request bodies, several things happen:

- For requests with a content-type header of `application/x-www-form-urlencoded`, `multipart/form-data`, or `application/json`, this plugin passes the raw request body in the `body` attribute, and tries to return a parsed version of those arguments in `body_args`. If this parsing fails, an error message is returned and the message is not sent.
- If the `content-type` is not `text/plain`, `text/html`, `application/xml`, `text/xml`, or `application/soap+xml`, then the body will be base64-encoded to ensure that the message can be sent as JSON. In such a case, the message has an extra attribute called `body_base64` set to `true`.
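For example, when a message arrives with `body_base64` set to `true`, a consumer must base64-decode the `body` field to recover the original request body. A minimal sketch, where the encoded value is an illustrative payload rather than real captured traffic:

```shell
# "Zm9vPWJhcg==" stands in for the "body" field of a hypothetical message
# on the kong-upstream topic that has body_base64=true.
# Decoding it recovers the original request body.
printf 'Zm9vPWJhcg==' | base64 -d
# prints: foo=bar
```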
TLS
Enable TLS by setting `config.security.ssl` to `true`.
mTLS
Enable mTLS by setting a valid UUID of a certificate in `config.security.certificate_id`. Note that this option requires `config.security.ssl` to be set to `true`. See the Certificate Object section in the Admin API documentation for information on how to set up Certificates.
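As a sketch, both security options can be supplied when the plugin is created through the Admin API. The route name and the certificate UUID below are placeholders, not real values:

```shell
# Sketch: a kafka-upstream plugin instance with TLS and mTLS enabled.
# "my-route" and the certificate UUID are placeholders; substitute the UUID
# of a Certificate entity you have already created in Kong.
curl -X POST http://localhost:8001/routes/my-route/plugins \
    --data "name=kafka-upstream" \
    --data "config.bootstrap_servers[1].host=localhost" \
    --data "config.bootstrap_servers[1].port=9093" \
    --data "config.topic=kong-upstream" \
    --data "config.security.ssl=true" \
    --data "config.security.certificate_id=11111111-2222-3333-4444-555555555555"
```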
SASL Authentication
To use SASL authentication, set the configuration option `config.authentication.strategy` to `sasl`. Make sure the mechanisms you use are enabled on the Kafka side as well.

This plugin supports multiple authentication mechanisms, including the following:

- PLAIN: Enable this mechanism by setting `config.authentication.mechanism` to `PLAIN`. You also need to provide a username and password with the config options `config.authentication.user` and `config.authentication.password`, respectively.
- SCRAM-SHA-256: Enable this mechanism by setting `config.authentication.mechanism` to `SCRAM-SHA-256`. You also need to provide a username and password with the config options `config.authentication.user` and `config.authentication.password`, respectively. In cryptography, the Salted Challenge Response Authentication Mechanism (SCRAM) is a family of modern, password-based challenge–response authentication mechanisms providing authentication of a user to a server.
- Delegation Tokens: Delegation tokens can be generated in Kafka and then used to authenticate this plugin. Delegation tokens leverage the `SCRAM-SHA-256` authentication mechanism. The `tokenID` is provided with the `config.authentication.user` field and the `token-hmac` is provided with the `config.authentication.password` field. To indicate that a token is used, you have to set `config.authentication.tokenauth` to `true`. Read more in the Kafka documentation on how to create, renew, and revoke delegation tokens.
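To tie these options together, a plugin instance using the SASL/PLAIN mechanism might be created like this. The route name and credentials are placeholders; use the username and password configured on your Kafka brokers:

```shell
# Sketch: kafka-upstream with SASL/PLAIN authentication.
# "my-route", the username, and the password are placeholders.
curl -X POST http://localhost:8001/routes/my-route/plugins \
    --data "name=kafka-upstream" \
    --data "config.bootstrap_servers[1].host=localhost" \
    --data "config.bootstrap_servers[1].port=9092" \
    --data "config.topic=kong-upstream" \
    --data "config.authentication.strategy=sasl" \
    --data "config.authentication.mechanism=PLAIN" \
    --data "config.authentication.user=kafka-user" \
    --data "config.authentication.password=kafka-password"
```

For delegation tokens, the same shape applies with `config.authentication.mechanism=SCRAM-SHA-256`, `config.authentication.tokenauth=true`, and the `tokenID`/`token-hmac` pair in the user and password fields.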
Known issues and limitations
Known limitations:
- Message compression is not supported.
- The message format is not customizable.
Quickstart
The following steps assume that Kong Gateway is installed and the Kafka Upstream plugin is enabled.
Note: The following example uses `zookeeper`, which is not required on some Kafka versions, and has been removed from others. Refer to the Kafka ZooKeeper documentation for more information.
- Create a `kong-upstream` topic in your Kafka cluster:

  ```shell
  ${KAFKA_HOME}/bin/kafka-topics.sh --create \
      --zookeeper localhost:2181 \
      --replication-factor 1 \
      --partitions 10 \
      --topic kong-upstream
  ```
- Create a service-less route, and add the `kafka-upstream` plugin to it:

  ```shell
  curl -X POST http://localhost:8001/routes \
      --data "name=kafka-upstream" \
      --data "hosts[]=kafka-upstream.dev"
  ```

  ```shell
  curl -X POST http://localhost:8001/routes/kafka-upstream/plugins \
      --data "name=kafka-upstream" \
      --data "config.bootstrap_servers[1].host=localhost" \
      --data "config.bootstrap_servers[1].port=9092" \
      --data "config.topic=kong-upstream"
  ```
- In a different console, start a Kafka consumer:

  ```shell
  ${KAFKA_HOME}/bin/kafka-console-consumer.sh \
      --bootstrap-server localhost:9092 \
      --topic kong-upstream \
      --partition 0 \
      --from-beginning \
      --timeout-ms 1000
  ```
- Make sample requests:

  ```shell
  curl -X POST http://localhost:8000 \
      --header 'Host: kafka-upstream.dev' \
      --data "foo=bar"
  ```

  You should receive a `200 { message: "message sent" }` response, and should see the request bodies appear on the Kafka consumer console you started in the previous step.