Configure Rate Limiting
Rate limiting allows you to restrict the number of requests a client can make to your APIs within a specific time window. This protects your backend services from being overwhelmed by too many requests and ensures fair usage among all users.
API7 Enterprise provides two plugins for rate limiting:
- limit-count: A simple rate limiting plugin that counts requests within a fixed time window.
- limit-count-advanced: An advanced version that supports sliding windows, multiple rules, and Redis Sentinel.
Prerequisites
- An API7 Enterprise instance is running.
- A Gateway Group is created and a Gateway instance is running.
- An API token generated from the Dashboard, used as the X-API-KEY header value in the examples below.
Configure Basic Rate Limiting
The limit-count plugin is ideal for simple rate limiting scenarios, such as limiting each IP address to 5 requests per minute.
- Admin API
- ADC
curl -k "https://localhost:7443/apisix/admin/services/rate-limit-service?gateway_group_id={group_id}" -X PUT \
-H "X-API-KEY: ${API_KEY}" \
-H "Content-Type: application/json" \
-d '{
"name": "rate-limit-service",
"upstream": {
"type": "roundrobin",
"scheme": "http",
"nodes": [
{
"host": "httpbin.org",
"port": 80,
"weight": 100
}
]
}
}'
curl -k "https://localhost:7443/apisix/admin/routes/rate-limit-route?gateway_group_id={group_id}" -X PUT \
-H "X-API-KEY: ${API_KEY}" \
-H "Content-Type: application/json" \
-d '{
"name": "rate-limit-route",
"methods": ["GET"],
"paths": ["/get"],
"service_id": "rate-limit-service",
"plugins": {
"limit-count": {
"count": 5,
"time_window": 60,
"rejected_code": 429,
"rejected_msg": "Too Many Requests",
"key": "remote_addr",
"policy": "local"
}
}
}'
services:
- name: rate-limit-service
upstream:
nodes:
- host: httpbin.org
port: 80
weight: 1
routes:
- name: rate-limit-route
uris:
- /get
plugins:
limit-count:
count: 5
time_window: 60
rejected_code: 429
rejected_msg: "Too Many Requests"
key_type: var
key: remote_addr
policy: local
adc sync -f adc.yaml
Configure Distributed Rate Limiting
In a multi-node Gateway deployment, use the redis policy to share the rate limit counters across all nodes.
- Admin API
- ADC
curl -k "https://localhost:7443/apisix/admin/services/distributed-rate-limit-service?gateway_group_id={group_id}" -X PUT \
-H "X-API-KEY: ${API_KEY}" \
-H "Content-Type: application/json" \
-d '{
"name": "distributed-rate-limit-service",
"upstream": {
"type": "roundrobin",
"scheme": "http",
"nodes": [
{
"host": "httpbin.org",
"port": 80,
"weight": 100
}
]
}
}'
curl -k "https://localhost:7443/apisix/admin/routes/distributed-rate-limit-route?gateway_group_id={group_id}" -X PUT \
-H "X-API-KEY: ${API_KEY}" \
-H "Content-Type: application/json" \
-d '{
"name": "distributed-rate-limit-route",
"methods": ["GET"],
"paths": ["/get"],
"service_id": "distributed-rate-limit-service",
"plugins": {
"limit-count": {
"count": 100,
"time_window": 60,
"policy": "redis",
"redis_host": "127.0.0.1",
"redis_port": 6379,
"redis_password": "your-password",
"redis_database": 0,
"allow_degradation": true
}
}
}'
services:
- name: distributed-rate-limit-service
upstream:
nodes:
- host: httpbin.org
port: 80
weight: 1
routes:
- name: distributed-rate-limit-route
uris:
- /get
plugins:
limit-count:
count: 100
time_window: 60
policy: redis
redis_host: "127.0.0.1"
redis_port: 6379
redis_password: "your-password"
redis_database: 0
allow_degradation: true
adc sync -f adc.yaml
Set allow_degradation to true to allow traffic even if the Redis server is unreachable.
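If the counters do not appear to be shared across nodes, first confirm that each Gateway node can reach Redis. The sketch below uses the placeholder host, port, password, and database from the example above; substitute your own values.

```shell
# Connectivity check from a Gateway node to the Redis server used by
# the redis policy. The values below are the placeholders from the
# plugin configuration above; replace them with your own.
redis-cli -h 127.0.0.1 -p 6379 -a 'your-password' -n 0 PING
# A reachable, authenticated server replies: PONG
```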
Configure Advanced Rate Limiting
The limit-count-advanced plugin supports a sliding window algorithm, which prevents traffic bursts that can occur at the boundary of fixed windows. It also allows defining multiple rules in a single plugin instance.
- Admin API
- ADC
curl -k "https://localhost:7443/apisix/admin/services/advanced-rate-limit-service?gateway_group_id={group_id}" -X PUT \
-H "X-API-KEY: ${API_KEY}" \
-H "Content-Type: application/json" \
-d '{
"name": "advanced-rate-limit-service",
"upstream": {
"type": "roundrobin",
"scheme": "http",
"nodes": [
{
"host": "httpbin.org",
"port": 80,
"weight": 100
}
]
}
}'
curl -k "https://localhost:7443/apisix/admin/routes/advanced-rate-limit-route?gateway_group_id={group_id}" -X PUT \
-H "X-API-KEY: ${API_KEY}" \
-H "Content-Type: application/json" \
-d '{
"name": "advanced-rate-limit-route",
"methods": ["GET"],
"paths": ["/get"],
"service_id": "advanced-rate-limit-service",
"plugins": {
"limit-count-advanced": {
"window_type": "sliding",
"rules": [
{
"count": 10,
"time_window": 1,
"key": "remote_addr"
},
{
"count": 1000,
"time_window": 3600,
"key": "remote_addr"
}
]
}
}
}'
services:
- name: advanced-rate-limit-service
upstream:
nodes:
- host: httpbin.org
port: 80
weight: 1
routes:
- name: advanced-rate-limit-route
uris:
- /get
plugins:
limit-count-advanced:
window_type: sliding
rules:
- count: 10
time_window: 1
key: remote_addr
- count: 1000
time_window: 3600
key: remote_addr
adc sync -f adc.yaml
In this example, a client is limited to 10 requests per second and 1,000 requests per hour.
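To see why a sliding window smooths boundary bursts, consider the common weighted-count approximation: the previous fixed window's count is scaled by how much of it still overlaps the sliding window. This sketch illustrates the idea only; it is not API7's actual implementation.

```shell
# Illustration (not API7 source code) of a common sliding-window
# approximation: weight the previous window's count by the fraction of
# it that still overlaps the sliding window, then add the current count.
prev_count=5   # requests in the previous 60 s window
curr_count=3   # requests so far in the current window
window=60      # time_window in seconds
elapsed=15     # seconds elapsed in the current window

estimate=$(awk -v p="$prev_count" -v c="$curr_count" -v w="$window" -v e="$elapsed" \
  'BEGIN { printf "%.2f", c + p * (w - e) / w }')
echo "estimated requests in the sliding window: $estimate"   # 6.75 here
```

With a plain fixed window, a client could send the full quota at the end of one window and again at the start of the next; the weighted estimate above rejects that burst because the previous window still counts partially.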
Validate the Configuration
Send multiple requests to the configured route to trigger the rate limit:
for i in {1..6}; do curl -s -o /dev/null -w "%{http_code}\n" "http://127.0.0.1:9080/get"; done
When the limit is reached, you will receive a 429 Too Many Requests (or your custom rejected_code) response.
If the route returns 404 immediately after you apply the configuration, wait a few seconds for the latest configuration to reach the gateway and retry.
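Instead of retrying by hand, you can poll until the route is live. This is a convenience sketch; 127.0.0.1:9080 stands in for your gateway address.

```shell
# Poll the route until it stops returning 404, then report the status.
# 127.0.0.1:9080 is a placeholder for your gateway address.
for i in {1..10}; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "http://127.0.0.1:9080/get")
  [ "$code" != "404" ] && break
  sleep 1
done
echo "last status: $code"
```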
The response will also include X-RateLimit headers if show_limit_quota_header is enabled (default is true):
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 5
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 45
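To watch these headers on live traffic, you can dump the response headers and filter for the rate-limit entries; the address below is a placeholder for your gateway.

```shell
# Dump only the response headers (-D -), discard the body, and filter
# for the rate-limit headers. Replace the address with your gateway's.
curl -s -D - -o /dev/null "http://127.0.0.1:9080/get" | grep -i '^x-ratelimit'
```

Running this repeatedly shows X-RateLimit-Remaining counting down toward zero within the window.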
Troubleshooting
- Redis Connection Issues: Ensure the Gateway nodes have network access to the Redis server and that the credentials are correct. Check the Gateway logs for connection errors.
- Wrong key_type: If you use a variable for the key, ensure key_type is set to var. If you use a constant string, set it to constant.
- Latency with Redis: If Redis latency is affecting request performance, consider using the local policy for high-frequency, low-latency limits, or tune the sync_interval for Redis-based policies.
Next Steps
- Implement Traffic Splitting — route a percentage of traffic to new versions.
- Configure Proxy Rewrite — modify request URIs and headers before proxying.