How does Nginx do load balancing? Summary of Nginx load balancing algorithms (with code)
How is Nginx load balancing done? There are several ways to implement it. This article introduces the Nginx load balancing algorithms in detail, covering five of them: round-robin, weight, ip_hash, fair and url_hash.
1. Nginx load balancing algorithm
1. Round-robin (default)
Each request is assigned to a different back-end server one by one in chronological order. If a back-end server goes down, it is automatically removed from rotation so that user access is not affected.
2. Weight (weighted round-robin)
The larger the weight value, the higher the probability that requests are routed to that server. It is mainly used when the back-end servers have uneven performance, or to assign different weights in a master-slave setup so that host resources are used reasonably and effectively.
3. ip_hash
Each request is assigned according to the hash of the client IP, so that visitors from the same IP always reach the same back-end server. This effectively solves the session-sharing problem of dynamic web pages.
4. fair
A more intelligent load balancing algorithm than weight and ip_hash. The fair algorithm balances load based on page size and loading time, i.e. it assigns requests according to the response time of the back-end servers, giving priority to those with short response times. Nginx itself does not support fair; to use this scheduling algorithm you must install the upstream_fair module.
5. url_hash
Requests are distributed according to the hash of the requested URL, so that each URL is always directed to the same back-end server, which further improves the efficiency of back-end cache servers. Nginx itself does not support url_hash; to use this scheduling algorithm you must install the Nginx hash package.
1. Round-robin (default)
Each request is assigned to a different back-end server one by one in chronological order. If a back-end server goes down, it is automatically removed from rotation.
2. weight
Specifies the round-robin probability. weight is proportional to the access ratio and is used when back-end server performance is uneven.
For example:
upstream bakend {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=10;
}
3. ip_hash
Each request is assigned according to the hash of the client IP, so that each visitor always reaches the same back-end server, which solves the session problem.
For example:
upstream bakend {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
4. fair (third party)
Requests are assigned according to the response time of the back-end servers, with shorter response times given priority.
upstream backend {
    server server1;
    server server2;
    fair;
}
5. url_hash (third party)
Requests are distributed according to the hash of the requested URL, so that each URL is always directed to the same back-end server. This is most effective when the back-end servers are caches.
Example: add a hash directive to the upstream block; when hash is used, the server directives must not carry other parameters such as weight. hash_method specifies the hash algorithm used.
upstream backend {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
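As an aside, more recent Nginx releases ship a built-in hash directive in the upstream module, so URL-based distribution can be configured without a third-party package. A minimal sketch, reusing the server names from the example above (the optional consistent parameter enables consistent hashing):
upstream backend {
    # built-in hashing on the request URI; "consistent" is optional
    hash $request_uri consistent;
    server squid1:3128;
    server squid2:3128;
}
On older builds without the built-in hash directive, the third-party module approach shown above still applies.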
2. Nginx load balancing scheduling status
In the Nginx upstream module, you can set the status of each backend server in load balancing scheduling. Commonly used statuses are:
1. down: the server temporarily does not participate in load balancing.
2. backup: a reserved backup machine. It receives requests only when all other non-backup machines fail or are busy, so it is under the least pressure.
3. max_fails: the number of allowed request failures, defaulting to 1. When the maximum is exceeded, the error defined by proxy_next_upstream is returned.
4. fail_timeout: the time to suspend the server after max_fails failures. max_fails and fail_timeout can be used together; these parameters are combined in the sketch below.
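To illustrate how these states fit together, here is a rough sketch; the addresses, ports and values are made up for illustration:
upstream backend {
    server 192.168.0.14:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 192.168.0.15:8080;              # defaults: weight=1, max_fails=1, fail_timeout=10s
    server 192.168.0.16:8080 down;         # temporarily taken out of rotation
    server 192.168.0.17:8080 backup;       # used only when the servers above are unavailable
}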
If Nginx could only proxy a single server, it would not be as popular as it is today. Nginx can be configured to proxy multiple servers, so that when one server goes down the system remains available. The configuration process is as follows:
1. Under the http node, add the upstream node.
upstream linuxidc {
    server 10.0.6.108:7080;
    server 10.0.0.85:8980;
}
2. In the location node under the server node, set proxy_pass to http:// plus the upstream name, i.e. "http://linuxidc".
location / {
    root  html;
    index index.html index.htm;
    proxy_pass http://linuxidc;
}
3. Load balancing is now initially complete. upstream balances in round-robin (default) mode: each request is assigned to a different back-end server one by one in chronological order, and if a back-end server goes down it is automatically removed. This approach is simple and cheap, but its drawbacks are low reliability and uneven load distribution. It is suitable for image server clusters and purely static page server clusters. A complete minimal configuration is sketched below.
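Putting steps 1 and 2 together, a minimal http block might look like the following; the listen port is an assumption, the rest mirrors the snippets above:
http {
    upstream linuxidc {
        server 10.0.6.108:7080;
        server 10.0.0.85:8980;
    }

    server {
        listen 80;               # assumed port
        location / {
            root  html;
            index index.html index.htm;
            proxy_pass http://linuxidc;
        }
    }
}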
In addition, upstream has other distribution strategies, as follows:
weight
Specifies the round-robin probability. weight is proportional to the access ratio and is used when back-end server performance is uneven. In the example below, 10.0.0.88 receives twice as much traffic as 10.0.0.77.
upstream linuxidc {
    server 10.0.0.77 weight=5;
    server 10.0.0.88 weight=10;
}
ip_hash (by client IP)
Each request is assigned according to the hash of the client IP, so that each visitor always reaches the same back-end server, which solves the session problem.
upstream favresin {
    ip_hash;
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
}
fair (third party)
Requests are assigned according to the response time of the back-end servers, with shorter response times given priority. It works similarly to the weight strategy.
upstream favresin {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
    fair;
}
url_hash (third party)
Requests are distributed according to the hash of the requested URL, so that each URL is directed to the same back-end server. This is most effective when the back-end servers are caches.
Note: when a hash directive is added to the upstream block, the server directives must not carry other parameters such as weight. hash_method specifies the hash algorithm used.
upstream resinserver {
    server 10.0.0.10:7777;
    server 10.0.0.11:8888;
    hash $request_uri;
    hash_method crc32;
}
upstream can also set a state value for each device. The meanings of these state values are as follows:
down: the current server temporarily does not participate in load balancing.
weight: defaults to 1. The larger the weight, the greater the share of the load.
max_fails: the number of allowed request failures, defaulting to 1. When the maximum is exceeded, the error defined by proxy_next_upstream is returned.
fail_timeout: the time to pause the server after max_fails failures.
backup: requests go to the backup machine only when all other non-backup machines are down or busy, so this machine is under the least pressure.
upstream bakend {
    # define the IPs and states of the load-balanced devices
    # note: in practice the backup parameter cannot be combined with ip_hash
    ip_hash;
    server 10.0.0.11:9090 down;
    server 10.0.0.11:8080 weight=2;
    server 10.0.0.11:6060;
    server 10.0.0.11:7070 backup;
}