# How to configure Nginx rate limiting
# Empty bucket
We start with the simplest rate-limiting configuration:
```nginx
limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit;
        proxy_pass http://login_upstream;
    }
}
```
- `$binary_remote_addr`: key the limit on the client IP address;
- `zone=ip_limit:10m`: name the rule `ip_limit` and allow it 10 MB of memory to record the limiting state per IP;
- `rate=10r/s`: limit the rate to 10 requests per second;
- `location /login/`: rate-limit the login endpoint.
The rate limit is 10 requests per second. If 10 requests arrive at an idle nginx at the same time, can they all be executed?

A leaky bucket leaks requests at a uniform rate. How is 10r/s a constant rate? One request is leaked every 100 ms.

Under this configuration the bucket is empty: it has no capacity to hold waiting requests, so any request that cannot be leaked immediately is rejected.

If 10 requests arrive at the same time, only one is executed; the other nine are rejected.

This is not very friendly. In most business scenarios we would like all 10 requests to be served.
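The rejection behavior above can be sketched with a toy leaky-bucket model. This is only an illustration of the algorithm, not nginx's actual implementation, and the function name is made up:

```python
# Toy model of limit_req with no burst capacity (illustrative only, not
# nginx source). rate=10r/s means the bucket drains one request per 100 ms;
# a request is rejected unless the bucket has room when it arrives.

def simulate(arrivals_ms, rate_per_s=10, burst=0):
    """Return an 'accept'/'reject' decision per request."""
    interval_ms = 1000.0 / rate_per_s     # 100 ms between leaks at 10r/s
    excess = 0.0                          # requests currently in the bucket
    last = None
    decisions = []
    for t in arrivals_ms:
        if last is not None:
            # the bucket drains at the configured rate between arrivals
            excess = max(0.0, excess - (t - last) / interval_ms)
        last = t
        if excess <= burst:               # room left for this request?
            excess += 1
            decisions.append("accept")
        else:
            decisions.append("reject")
    return decisions

# 10 requests at the same instant: only the first is accepted.
print(simulate([0] * 10))
```

Requests spaced 100 ms apart would all be accepted, since the bucket drains fully between arrivals.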
# burst

Let's change the configuration to solve the problem from the previous section:
```nginx
limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12;
        proxy_pass http://login_upstream;
    }
}
```
- `burst=12`: set the size of the leaky bucket to 12.

Although it is logically called a leaky bucket, it is implemented as a FIFO queue that temporarily holds requests that cannot be executed yet.

The leak rate is still one request per 100 ms, but concurrent requests that cannot be executed right away can be queued first; only when the queue is full are new requests rejected.

In this way the leaky bucket not only limits the rate but also smooths traffic peaks ("peak shaving and valley filling").
Under this configuration, if 10 requests arrive at the same time, they are executed in sequence, one every 100 ms.

All 10 are executed, but queuing greatly increases latency, which is still unacceptable in many scenarios.
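Extending the toy model with a queue shows the added latency. Again this is only an illustration with made-up names; `burst` counts the requests allowed to wait:

```python
# Toy model of limit_req with burst: requests that cannot leak immediately
# wait in a FIFO queue; each starts 100 ms after the previous one, and a
# request is rejected only when it would have to wait more than `burst` slots.

def simulate_burst(arrivals_ms, rate_per_s=10, burst=12):
    """Return a (decision, start_time_ms) pair per request."""
    interval_ms = 1000.0 / rate_per_s
    next_start = 0.0                      # earliest free leak slot
    results = []
    for t in arrivals_ms:
        start = max(t, next_start)
        waiting_slots = (start - t) / interval_ms
        if waiting_slots > burst:         # queue full: reject
            results.append(("reject", None))
        else:
            results.append(("accept", start))
            next_start = start + interval_ms
    return results

# 10 simultaneous requests: all accepted, starting at 0, 100, ..., 900 ms.
print(simulate_burst([0] * 10))
```

The last request waits 900 ms before it runs, which is exactly the latency cost the next section removes.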
# nodelay
Let's continue modifying the configuration to solve the problem of increased latency caused by queuing:
```nginx
limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12 nodelay;
        proxy_pass http://login_upstream;
    }
}
```
- `nodelay`: start executing requests earlier. Previously a request had to wait until it leaked out of the bucket before executing; now it starts executing as soon as it enters the bucket.

A request is either executed immediately or rejected; requests are no longer delayed by throttling.
Because requests still leak out of the bucket at a constant rate and the bucket capacity is fixed, on average at most 10 requests are executed per second, so the purpose of rate limiting is still achieved.
But this also has a drawback: the limit holds on average, yet the flow is no longer uniform. Taking the configuration above as an example, if 12 requests arrive at the same time, all 12 can execute immediately; subsequent requests then enter the bucket at the constant rate, with one executing every 100 ms. And if there are no requests for a while and the bucket empties, another 12 concurrent requests may again execute at the same time.
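In the toy model, nodelay removes the queue entirely: the bucket only tracks how many recent requests count against the limit, and everything it admits runs at once (illustrative only, made-up names):

```python
# Toy model of limit_req with nodelay: requests run immediately while the
# bucket has room and are rejected otherwise; the bucket still drains at
# one request per 100 ms, so the average rate stays capped at 10r/s.

def simulate_nodelay(arrivals_ms, rate_per_s=10, burst=12):
    """Return an 'accept'/'reject' decision per request; accepts run at once."""
    interval_ms = 1000.0 / rate_per_s
    excess = 0.0                          # bucket fill level
    last = None
    decisions = []
    for t in arrivals_ms:
        if last is not None:
            excess = max(0.0, excess - (t - last) / interval_ms)
        last = t
        if excess <= burst:
            excess += 1
            decisions.append("accept")
        else:
            decisions.append("reject")
    return decisions

# 12 simultaneous requests: all run at once, none delayed.
print(simulate_nodelay([0] * 12))
```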
In most cases this uneven flow is not a big problem. However, nginx also provides a parameter to control how many requests may execute concurrently: `delay`, which specifies how many requests in the bucket are served without delay.
```nginx
limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12 delay=4;
        proxy_pass http://login_upstream;
    }
}
```
- `delay=4`: start delaying from the 5th request in the bucket.

In this way, by tuning the `delay` parameter you can adjust how many requests are allowed to execute concurrently and keep the flow even. For resource-intensive services it is still worth controlling this number.
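A last tweak to the toy model shows this hybrid behavior: the first few requests run immediately and the rest are spaced out. As before, this is an approximation with made-up names; here `delay` is the number of undelayed bucket slots:

```python
# Toy model of limit_req with burst and delay: requests within the first
# `delay` bucket slots run immediately; deeper requests wait for their leak
# slot (100 ms apart); beyond `burst` slots they are rejected.

def simulate_delay(arrivals_ms, rate_per_s=10, burst=12, delay=4):
    """Return a (decision, start_time_ms) pair per request."""
    interval_ms = 1000.0 / rate_per_s
    excess = 0.0
    last = None
    results = []
    for t in arrivals_ms:
        if last is not None:
            excess = max(0.0, excess - (t - last) / interval_ms)
        last = t
        if excess > burst:                # bucket full: reject
            results.append(("reject", None))
            continue
        excess += 1
        # requests deeper than `delay` wait for their leak slot
        wait_ms = max(0.0, excess - 1 - delay) * interval_ms
        results.append(("accept", t + wait_ms))
    return results

# 12 simultaneous requests: the first 5 start immediately, the rest are
# spaced 100 ms apart (starts: 0, 0, 0, 0, 0, 100, 200, ..., 700).
print(simulate_delay([0] * 12))
```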
The above is the detailed content of how to configure Nginx rate limiting. For more information, see related articles on the PHP Chinese website.