How to load balance nginx
1. Nginx reverse proxy
Before introducing nginx load balancing, let's first introduce the nginx reverse proxy. Since reverse proxies are the common case, forward proxies are not covered here.
nginx's proxying process is: the client sends a request to nginx, nginx forwards the request to a back-end server, the back-end server finishes processing and sends the result back to nginx, and nginx returns the result to the client. The back-end server can be remote or local, or even another virtual host defined inside the same nginx server. The servers that receive nginx's forwarded requests are called upstream servers.
One purpose of using nginx as a proxy is to scale out the infrastructure. nginx can handle a large number of concurrent connections: after a request arrives, nginx can forward it to any number of back-end servers for processing, which amounts to spreading the load across the entire cluster.
Syntax: proxy_pass URL
Explanation: the URL can take forms such as http://localhost:8000/uri/, and the directive is configured inside a location block.
Example: Let’s write a simple reverse proxy:
There is no test_proxy.html file in the document root of my server listening on port 80, but there is one in the document root of the server listening on port 8080. I added the following to the port-80 server block:
location ~ /test_proxy.html$ {
    proxy_pass http://127.0.0.1:8080;
}
Then enter http://IP address/test_proxy.html in the browser, and the requested page appears. In effect, port 80 forwards the request to port 8080 and fetches the data back.
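Putting the pieces together, a minimal sketch of the two server blocks might look like the following (the server names and document-root paths are assumptions for illustration):

```nginx
# Front-end server: its document root has no test_proxy.html,
# so matching requests are proxied to port 8080.
server {
    listen 80;
    server_name localhost;
    root /var/www/site80;

    location ~ /test_proxy.html$ {
        proxy_pass http://127.0.0.1:8080;
    }
}

# Back-end server: its document root contains test_proxy.html.
server {
    listen 8080;
    server_name localhost;
    root /var/www/site8080;
}
```

Requesting http://IP address/test_proxy.html then returns the file served by the port-8080 server.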
2. Buffering
nginx also provides a buffering mechanism to improve performance. Without buffering, data is sent directly from the back-end server to the client. Buffering temporarily stores the back-end's results on nginx, so the connection from nginx to the back-end can be closed early, reducing IO load. Content is generally held in memory, but when there is too much content and memory is insufficient, it is written to a temporary file directory. The following are some commonly used buffering directives, which can be set in the http, server, and location blocks.
proxy_buffering: Controls whether buffering is enabled under this content block. The default is "on".
proxy_buffers: takes two parameters; the first sets the number of buffers per request and the second sets the size of each buffer. The default is 8 buffers of one page each (usually 4k or 8k). The larger these values, the more content can be buffered.
proxy_buffer_size: the first part of the back-end response (the part containing the headers) is buffered separately; this directive sets the size of that buffer. It defaults to the same size as proxy_buffers. Since headers are usually small, it can be set smaller.
proxy_busy_buffers_size: sets the total size of buffers marked as "client-ready". The client can read from only one buffer at a time, and buffers are sent to the client in queue order; this directive limits the size of that queue.
proxy_temp_path: Define the path where nginx stores temporary files.
proxy_max_temp_file_size: the maximum size of temporary-file storage per request. If the upstream response is too large to fit in the buffers, nginx writes it to temporary files.
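The buffering directives above can be combined in a single location block. A sketch follows (the back-end address, temp path, and sizes are illustrative assumptions, not recommendations):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;

    proxy_buffering on;             # enable buffering (the default)
    proxy_buffer_size 4k;           # separate buffer for the response headers
    proxy_buffers 8 4k;             # 8 buffers of one page each
    proxy_busy_buffers_size 8k;     # limit on buffers busy sending to the client
    proxy_temp_path /var/cache/nginx/proxy_temp;  # spill-over directory
    proxy_max_temp_file_size 1024m;                # per-request temp-file limit
}
```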
3. Load balancing
Configuration syntax: upstream name { ... }
Explanation: name is a custom name, and {} holds the definition. The upstream block can only be defined in the http block, not inside a server block. Once defined, it is referenced from a location block with: proxy_pass http://name;
Example: limited by the number of servers available, here we use different ports on one server to simulate load balancing. The configuration for multiple servers is similar.
Add the following code in the http block:
upstream test {
    #ip_hash
    server IP:8001;
    server IP:8002;
    server IP:8003;
}
Then, we add the following content to the location block within the server:
location / {
    # Set the Host header and the client's real address so the back-end can obtain the client's real IP
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_connect_timeout 30;        # set the connection timeout to 30s
    proxy_send_timeout 60;
    proxy_read_timeout 60;
    proxy_buffer_size 32k;           # set buffer size
    proxy_buffering on;              # enable buffering
    proxy_buffers 4 128k;            # set the number and size of buffers
    proxy_busy_buffers_size 256k;    # set the client-ready buffer size
    proxy_max_temp_file_size 256k;
    proxy_pass http://test;          # call the load-balancing upstream set above
}
Of course, a reminder: the ports referenced above must already be configured in advance as server blocks in the configuration file.
Then we access our host address and refresh repeatedly, and we will see the page served from each port in turn. The default load-balancing method is polling (round robin).
If you are using different servers for load balancing, only slight changes are needed, for example:
upstream mydomain.com {
    server 47.95.242.167:80;
    server 47.95.242.168:80;
    server 47.95.242.169:80;
}
After this, the remaining code in the http block is similar to the above; then make the following configuration on each of the other three servers. Of course, we still have to open the firewalls on those three servers.
server {
    listen 80;
    server_name www.mydomain.com;
    index index.htm index.php index.html;
    root directory path;
}
① Back-end server states in load-balancing scheduling
down: the current server temporarily does not participate in load balancing.
backup: Reserved backup server.
max_fails: The number of allowed request failures.
fail_timeout: the time the server is paused after max_fails failures.
max_conns: Limit the maximum number of receiving connections.
Note: the above states are configured inside the upstream block, appended after the server directive within the {}. For example, server IP:8001 down; means that this server does not participate in load balancing, while server IP:8001 backup; marks it as a backup.
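For example, an upstream combining these states might be written as follows (the IP addresses, ports, and the specific limits are placeholders for illustration):

```nginx
upstream test {
    server 192.168.1.10:8001 down;                          # temporarily out of rotation
    server 192.168.1.10:8002 max_fails=3 fail_timeout=30s;  # pause 30s after 3 failed requests
    server 192.168.1.10:8003 max_conns=100;                 # at most 100 concurrent connections
    server 192.168.1.10:8004 backup;                        # used only when the others are unavailable
}
```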
②Scheduling algorithm
Polling (round robin): requests are assigned to the different back-end servers one by one in order; this is the default.
Weighted polling: you can add weight=number after a configured server; the higher the number, the greater the probability that server is chosen.
ip_hash: Each request is allocated according to the hash of the access IP, so that access from the same IP to a backend server is fixed.
least_conn: least connections; each request is sent to whichever server currently has the fewest connections.
url_hash: distributes requests according to a hash of the requested URL, so each URL is directed to the same back-end server.
hash key: hash on a custom key, which can contain text, variables, or a combination of both.
Note: the scheduling algorithm is configured inside the upstream block. For example, writing ip_hash within the curly brackets means ip_hash is used for allocation.
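For instance, the two most common algorithms can be sketched as separate upstream blocks (addresses and weights are placeholders; use only one algorithm directive per upstream):

```nginx
# Weighted polling: 8001 receives roughly three times the traffic of 8002.
upstream weighted_pool {
    server 192.168.1.10:8001 weight=3;
    server 192.168.1.10:8002 weight=1;
}

# ip_hash: requests from the same client IP always reach the same back-end.
upstream sticky_pool {
    ip_hash;
    server 192.168.1.10:8001;
    server 192.168.1.10:8002;
}
```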
The above is the detailed content of How to load balance nginx. For more information, please follow other related articles on the PHP Chinese website!