
Nginx load balancing algorithm sharing


This article mainly shares the Nginx load balancing algorithms with you, hoping it helps everyone.

1. Nginx load balancing algorithm

1. Round robin (default)

Each request is assigned to a different backend server in turn, in chronological order. If a backend server crashes, the faulty server is automatically removed so that user access is not affected.
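As a minimal sketch of this default behavior (the upstream name and addresses are placeholders, not from the article), an upstream block with no extra directives is balanced round-robin:

upstream web_servers {
      # With no other directives, requests are distributed to these servers in turn.
      server 10.0.0.1:8080;
      server 10.0.0.2:8080;
}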

2. Weight (weighted round robin)

The larger the weight value, the higher the proportion of requests a server receives. It is mainly used when backend servers have uneven performance, or to assign different weights in a master-slave setup so that host resources are used effectively.

3. ip_hash (source address hashing)

The idea of source address hashing is to compute a hash of the client's IP address and take it modulo the size of the server list; the result is the index of the backend server the client will be sent to. With source address hashing, as long as the backend server list stays unchanged, clients with the same IP address are always mapped to the same backend server.

4. fair

A smarter load balancing algorithm than weight and ip_hash, fair balances load based on page size and loading time: requests are allocated according to the response time of each backend server, and servers with shorter response times are preferred. Nginx itself does not support fair; to use this scheduling algorithm you must install the upstream_fair module.

5. url_hash

Requests are distributed according to the hash of the requested URL, so that each URL is always directed to the same backend server, which further improves the hit rate of backend cache servers. Nginx itself does not support url_hash; to use this scheduling algorithm you must install the Nginx hash package.

1. Round robin (default)

Each request is assigned to a different backend server in turn, in chronological order. If a backend server goes down, it is automatically removed.

2. Weight

Specifies the round-robin weight; weight is proportional to the share of requests a server receives, and it is used when backend server performance is uneven.
For example:

upstream bakend {  
server 192.168.0.14 weight=10;  
server 192.168.0.15 weight=10;  
}

3. ip_hash

Each request is assigned according to the hash of the client IP, so that each visitor consistently reaches the same backend server, which can solve session persistence problems.
For example:

upstream bakend {  
ip_hash;  
server 192.168.0.14:88;  
server 192.168.0.15:80;  
}

4. fair (third party)

Requests are allocated according to the response time of the backend servers; servers with shorter response times are preferred.

upstream backend {  
server server1;  
server server2;  
fair;  
}

5. url_hash (third party)

Requests are distributed according to the hash of the requested URL, so that each URL is directed to the same backend server. This is most effective when the backend servers are caches.
Example: add a hash statement to the upstream block. Other parameters such as weight cannot be written in the server lines, and hash_method specifies the hash algorithm to use.

upstream backend {
      server squid1:3128;  # 10.0.0.10:7777
      server squid2:3128;  # 10.0.0.11:8888
      hash $request_uri;
      hash_method crc32;
}

2. Nginx load balancing scheduling status

In the Nginx upstream module, you can set the status of each backend server in load balancing scheduling. Commonly used statuses are:

1. down, meaning the server temporarily does not participate in load balancing.

2. backup, a reserved backup machine. It receives requests only when all other non-backup machines fail or are busy, so it carries the lightest load.

3. max_fails, the number of allowed request failures, which defaults to 1. When the maximum number is exceeded, the error defined by the proxy_next_upstream module is returned.

4. fail_timeout, the time for which a server is suspended after max_fails failures. max_fails and fail_timeout can be used together, as in the sketch below.
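A minimal sketch combining these states (the upstream name, addresses, and values are illustrative placeholders, not taken from the article):

upstream app_servers {
      server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;  # after 3 failures, skip this server for 30s
      server 10.0.0.2:8080 weight=2;                       # receives a larger share of requests
      server 10.0.0.3:8080 down;                           # temporarily excluded from load balancing
      server 10.0.0.4:8080 backup;                         # used only when the other servers are unavailable
}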

If Nginx could only proxy a single server, it would not be as popular as it is today. Nginx can be configured to proxy multiple servers, so that when one server goes down, the system remains available. The specific configuration steps are as follows:

1. Under the http block, add an upstream block.

upstream linuxidc { 
      server 10.0.6.108:7080; 
      server 10.0.0.85:8980; 
}

2. In the location block under the server block, set proxy_pass to "http://" plus the upstream name, i.e. "http://linuxidc".

location / { 
            root  html; 
            index  index.html index.htm; 
            proxy_pass http://linuxidc; 
}

3. The basic load balancing setup is now complete. By default, upstream distributes load in round-robin fashion: each request is assigned to a different backend server in turn, in chronological order, and if a backend server goes down it is automatically removed. This approach is simple and cheap, but its drawbacks are low reliability and uneven load distribution. It is suitable for image server clusters and purely static page server clusters.

In addition, upstream supports other distribution strategies, as follows:

weight

Specifies the round-robin weight; weight is proportional to the share of requests a server receives, and it is used when backend server performance is uneven. In the example below, 10.0.0.88 receives twice as many requests as 10.0.0.77.

upstream linuxidc{ 
      server 10.0.0.77 weight=5; 
      server 10.0.0.88 weight=10; 
}

ip_hash (client IP)

Each request is assigned according to the hash of the client IP, so that each visitor consistently reaches the same backend server, which can solve session persistence problems.

upstream favresin{ 
      ip_hash; 
      server 10.0.0.10:8080; 
      server 10.0.0.11:8080; 
}

fair (third party)

Requests are allocated according to the response time of the backend servers; servers with shorter response times are preferred. It is similar to the weight distribution strategy.

 upstream favresin{      
      server 10.0.0.10:8080; 
      server 10.0.0.11:8080; 
      fair; 
}

url_hash (third party)

Requests are distributed according to the hash of the requested URL, so that each URL is directed to the same backend server. This is most effective when the backend servers are caches.

Note: when a hash statement is added to the upstream, other parameters such as weight cannot be written in the server lines; hash_method specifies the hash algorithm to use.

 upstream resinserver{ 
      server 10.0.0.10:7777; 
      server 10.0.0.11:8888; 
      hash $request_uri; 
      hash_method crc32; 
}

upstream can also set a state value for each server; these states have the following meanings:

down: the current server temporarily does not participate in load balancing.

weight: defaults to 1. The larger the weight, the greater the share of the load.

max_fails: the number of allowed request failures, which defaults to 1. When the maximum number is exceeded, the error defined by the proxy_next_upstream module is returned.

fail_timeout: the time for which the server is suspended after max_fails failures.

backup: requests go to the backup machine only when all other non-backup machines are down or busy, so this machine carries the lightest load.

upstream bakend {  # define the IPs and states of the load-balanced servers
      ip_hash;
      server 10.0.0.11:9090 down;
      server 10.0.0.11:8080 weight=2;
      server 10.0.0.11:6060;
      server 10.0.0.11:7070 backup;
}
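For completeness, a minimal sketch of how such an upstream might be referenced from a server block (the listen port and server_name are placeholders, not from the article):

server {
      listen 80;
      server_name example.com;         # placeholder
      location / {
            proxy_pass http://bakend;  # forward requests to the upstream defined above
      }
}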


