Performance testing benchmarks
As of Kong Gateway 3.6.x, Kong publishes Kong Gateway performance results, along with the test methodology and details. Kong plans to conduct and publish Kong Gateway performance results for each subsequent minor release.
In addition to viewing our performance test results, you can use our public test suite to conduct your own performance tests with Kong Gateway.
Kong Gateway performance testing method and results for 3.7.x
Kong tests Kong Gateway performance using our public test suite.
The following sections explain the test methodology, results, and configuration.
Test method
The performance tests cover a number of baseline configurations and common use cases of Kong Gateway. The following describes the test cases used and the configuration methodology:
- Environment: Kubernetes environment on AWS infrastructure.
- Test use cases (a configuration sketch for one of these cases follows this list):
  - Basic Kong Gateway proxy.
  - Rate limiting a request with no authentication.
  - Authentication using the Basic Auth plugin and rate limiting.
  - Authentication using the Key Auth plugin and rate limiting.
- Routes and consumers: Each use case was tested with two configurations: one with one route and one consumer, and one with 100 routes and 100 consumers, for a total of eight test cases. Test cases that didn't require authentication used no consumers.
- Traffic distribution: Normal distribution across both routes and consumers.
- Protocol: HTTPS only.
- Sample size: Each test case was run five times, each for a duration of 15 minutes. The results are an average of the five different test runs.
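To make the plugin combinations concrete, the sketch below shows one way the rate limiting plus key auth case could be expressed as Kong Ingress Controller resources attached to a route. It is illustrative only: the resource names, rate limit, path, and upstream service are hypothetical and are not the values used in the published tests.

```yaml
# Hypothetical resources: names, the rate limit, path, and upstream service are
# placeholders for illustration, not the values used in the published tests.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: example-rate-limit
plugin: rate-limiting
config:
  minute: 1000000   # set high so the limiter runs on every request but rarely rejects
  policy: local
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: example-key-auth
plugin: key-auth
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-route
  annotations:
    konghq.com/plugins: example-rate-limit, example-key-auth
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /test
            pathType: Prefix
            backend:
              service:
                name: upstream-echo   # hypothetical upstream service
                port:
                  number: 80
```

The authenticated cases would also need a KongConsumer with a key-auth credential attached; that part is omitted here for brevity.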
Kong Gateway 3.7.x performance benchmark results
| Test type | Number of routes/consumers | Requests per second (RPS) | P99 latency (ms) | P95 latency (ms) |
|---|---|---|---|---|
| Kong proxy with no plugins | 1 route, 0 consumers | 137358.8 | 7.25 | 4.06 |
| Kong proxy with no plugins | 100 routes, 0 consumers | 133953.4 | 7.20 | 4.17 |
| Rate limit and no auth | 1 route, 0 consumers | 121737.2 | 7.69 | 4.01 |
| Rate limit and no auth | 100 routes, 0 consumers | 117521.4 | 8.53 | 4.22 |
| Rate limit and key auth | 1 route, 1 consumer | 103777.6 | 9.43 | 4.39 |
| Rate limit and key auth | 100 routes, 100 consumers | 98777.5 | 9.16 | 4.79 |
| Rate limit and basic auth | 1 route, 1 consumer | 97397.6 | 9.69 | 4.93 |
| Rate limit and basic auth | 100 routes, 100 consumers | 92372.6 | 10.17 | 5.31 |
Test environment
Kong ran these tests in AWS on EC2 machines. We used Kubernetes taints to ensure that Kong Gateway ran on its own node while the load testing and observability tools ran on their own separate nodes in the same cluster.
Kong Gateway ran on a single dedicated c5.4xlarge instance, and the two nodes for the observability stack and K6 ran on dedicated c5.metal instances. We used the metal instances for the observability and load generation toolchain to ensure it wasn't resource constrained in any way. K6 is very resource intensive when generating a high volume of traffic, and we observed that using smaller or less powerful instances for the toolchain caused the observability and load generation tools to become a bottleneck, limiting the measured Kong Gateway performance.
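As an illustration of that node isolation, a taint plus matching toleration and node selector along the following lines could be used; the taint key, value, and label below are hypothetical, not necessarily what the test suite applies.

```yaml
# Hypothetical taint, label, and scheduling constraints; key/value are placeholders.
# Taint and label the node reserved for Kong Gateway, for example:
#   kubectl taint nodes <kong-node-name> dedicated=kong-gateway:NoSchedule
#   kubectl label nodes <kong-node-name> dedicated=kong-gateway
# Then give only the Kong Gateway pods a matching toleration and node selector:
tolerations:
  - key: dedicated
    operator: Equal
    value: kong-gateway
    effect: NoSchedule
nodeSelector:
  dedicated: kong-gateway
```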
Test configuration
For these tests, we set the number of worker processes to match the number of cores available on the node running Kong Gateway, which had 16 vCPUs, so the worker process count was set to 16. This follows Kong's overall performance guidance. No other tuning was done.
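For example, if Kong Gateway is deployed with the Kong Helm chart (as in an EKS-based setup), the chart's `env` mapping translates each key into the corresponding `KONG_`-prefixed environment variable, so a minimal values override along these lines would apply the same setting. This is a sketch under that assumption, not the exact configuration used in the tests; `nginx_worker_processes` is Kong's standard configuration directive for the worker count.

```yaml
# Minimal sketch of a Helm values override for the Kong chart; env.<key>
# becomes the KONG_<KEY> environment variable on the Kong Gateway container.
env:
  nginx_worker_processes: "16"   # match the 16 vCPUs of the c5.4xlarge node
```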
Conduct your own performance test using Kong’s test suite
You can use Kong’s public test suite repo to spin up an EKS cluster with Kong Gateway, Redis, Prometheus, and Grafana installed. The repo also configures K6, a popular open source load testing tool. You can use this test suite to conduct your own performance tests.
Once the cluster is generated, you can apply the provided YAML to configure Kong Gateway for the included test cases and to configure the observability plugins for metrics scraping by the Prometheus instance already provisioned in the cluster. If you’d rather define your own test scenarios, you can define the Kong Gateway configuration you want to test and apply it to the cluster.
From there, you can use the included bash scripts to run K6 tests. After the tests complete, you can port-forward into the cluster and view the Grafana dashboard with the performance results.
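In rough outline, that workflow looks like the following; the file, script, namespace, and service names are placeholders, and the actual ones live in the test suite repo.

```bash
# Placeholder names throughout; see the test suite repo for the real files and scripts.
# Apply the provided Kong Gateway and observability configuration to the cluster.
kubectl apply -f kong-test-config.yaml

# Run one of the included K6 load test scripts.
./run-k6-test.sh

# Port-forward to Grafana and open http://localhost:3000 to view the results dashboard.
kubectl port-forward -n monitoring svc/grafana 3000:3000
```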
More information
- Establish a Kong Gateway performance benchmark: Learn how to optimize Kong Gateway for performance.