Made by
Kong Inc.
Supported Gateway Topologies
hybrid db-less traditional
Supported Konnect Deployments
hybrid cloud-gateways serverless
Compatible Protocols
grpc grpcs http https ws wss
Minimum Version
Kong Gateway - 3.10

This plugin consumes messages from Apache Kafka topics and makes them available through HTTP endpoints. For more information, see Kafka topics.

Note: This plugin has the following known limitations:

  • Message compression is not supported.
  • The message format is not customizable.
  • Kong Gateway does not support Kafka 4.0.

Kong also provides Kafka plugins for publishing messages.

Implementation details

The plugin supports the following modes of operation:

  • http-get: Consume messages via HTTP GET requests (default)
  • server-sent-events: Stream messages using server-sent events
  • websocket v3.11+: Stream messages over a WebSocket connection
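
As an illustration, a declarative configuration enabling the plugin in http-get mode might look like the following sketch. The field names shown (bootstrap_servers, topics, mode) are assumptions for illustration; verify them against the plugin's configuration reference before use.

```yaml
# Hypothetical declarative config sketch; confirm field names against
# the Kafka Consume plugin configuration reference.
plugins:
  - name: kafka-consume
    route: kafka-consume-route
    config:
      bootstrap_servers:
        - host: kafka.example.com
          port: 9092
      topics:
        - name: orders
      mode: http-get
```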

WebSocket mode v3.11+

In websocket mode, the plugin maintains a bi-directional WebSocket connection with the client. This allows:

  • Continuous delivery of Kafka messages to the client
  • Optional client acknowledgments (client-acks) for each message or batch, enabling at-least-once delivery semantics
  • Real-time message flow without the limitations of HTTP polling

To consume messages via WebSocket:

  1. Establish a WebSocket connection to the route where the plugin is enabled and mode is set to websocket
  2. Messages are streamed to the client as text frames in JSON format
  3. Optionally, send acknowledgment messages to indicate successful processing
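
The frame and acknowledgment shapes below are illustrative assumptions, not the plugin's documented wire format; they sketch how a client might decode a JSON text frame and build an ack when client-acks are enabled.

```python
import json

def decode_frame(frame: str) -> dict:
    """Parse a WebSocket text frame carrying one Kafka message as JSON."""
    return json.loads(frame)

def build_ack(message: dict) -> str:
    """Build a hypothetical acknowledgment payload for at-least-once delivery.

    The {"ack": ...} shape is an assumption for illustration only.
    """
    return json.dumps({
        "ack": {
            "topic": message["topic"],
            "partition": message["partition"],
            "offset": message["offset"],
        }
    })

# A frame as the gateway might deliver it (illustrative shape).
frame = '{"topic": "orders", "partition": 0, "offset": 42, "value": "order-123"}'
msg = decode_frame(frame)
ack = build_ack(msg)
```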

This mode provides parity with HTTP-based consumption, including support for:

  • Message keys
  • Topic filtering
  • Kafka authentication and TLS
  • Auto or manual offset commits

Message delivery guarantees

When running multiple Data Plane nodes, there is no coordination between nodes. In high-load scenarios, you may observe the same message being delivered multiple times across different Data Plane nodes.

To minimize duplicate message delivery in a multi-node setup, consider:

  • Using a single Data Plane node for consuming messages from specific topics
  • Implementing idempotency handling in your consuming application
  • Monitoring Consumer Group offsets across your Data Plane nodes
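
Idempotency handling in the consuming application can be as simple as tracking which (topic, partition, offset) triples have already been processed. A minimal sketch; a production version would bound or persist the seen-set:

```python
class IdempotentProcessor:
    """Skip messages whose (topic, partition, offset) was already seen."""

    def __init__(self):
        self._seen = set()
        self.processed = []

    def handle(self, message: dict) -> bool:
        key = (message["topic"], message["partition"], message["offset"])
        if key in self._seen:
            # Duplicate delivery, e.g. from another Data Plane node.
            return False
        self._seen.add(key)
        self.processed.append(message["value"])
        return True

p = IdempotentProcessor()
m = {"topic": "orders", "partition": 0, "offset": 7, "value": "order-7"}
p.handle(m)  # True: first delivery is processed
p.handle(m)  # False: duplicate is ignored
```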

Schema registry support v3.11+

The Kafka Consume plugin supports integration with Confluent Schema Registry for AVRO and JSON schemas.

Schema registries provide a centralized repository for managing and validating schemas for data formats like AVRO and JSON. Integrating with a schema registry allows the plugin to validate and serialize/deserialize messages in a standardized format.

Using a schema registry with Kong Gateway provides several benefits:

  • Data validation: Ensures messages conform to a predefined schema before being processed.
  • Schema evolution: Manages schema changes and versioning.
  • Interoperability: Enables seamless communication between different services using standardized data formats.
  • Reduced overhead: Minimizes the need for custom validation logic in your applications.
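
The data-validation benefit can be illustrated with a tiny hand-rolled check. Real deployments rely on the registry's AVRO or JSON Schema tooling; this sketch only shows the kind of conformance a registered schema enforces:

```python
# Minimal stand-in for registry-backed schema validation:
# check that required fields exist with the expected types.
SCHEMA = {"order_id": str, "amount": float}

def validate(message: dict, schema: dict = SCHEMA) -> bool:
    return all(
        field in message and isinstance(message[field], ftype)
        for field, ftype in schema.items()
    )

validate({"order_id": "o-1", "amount": 9.99})  # conforms to the schema
validate({"order_id": "o-1"})                  # missing field, rejected
```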

To learn more, see Kong’s schema registry documentation.

How schema registry validation works

When a consumer plugin is configured with a schema registry, the plugin fetches the referenced schema from the registry and uses it to validate and deserialize each consumed message before the message is returned to the client.

Configure schema registry

To configure Schema Registry with the Kafka Consume plugin, use the config.schema_registry parameter in your plugin configuration.

See the schema registry configuration example for sample configuration values.
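
A hedged sketch of what such a configuration might look like in declarative form; the field names under config.schema_registry are assumptions and should be confirmed against the plugin's configuration reference:

```yaml
# Illustrative sketch only; verify field names against the
# Kafka Consume plugin configuration reference.
plugins:
  - name: kafka-consume
    config:
      schema_registry:
        confluent:
          url: http://schema-registry.example.com:8081
```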
