
nginx application: using nginx for load balancing

不言 (Original)
2018-06-01

This article introduces how to use nginx for load balancing. It has some reference value, and readers who need it are welcome to follow along.

nginx is generally used for seven-layer (Layer 7) load balancing. This article covers some basic knowledge about load balancing, followed by a simple example of using nginx to perform it.

Four-layer load balancing vs. seven-layer load balancing

The terms seven-layer and four-layer load balancing come from the layers of the ISO OSI network model. nginx is called a seven-layer load balancer because it performs load balancing at the application layer using the HTTP protocol, while LVS, which performs load balancing at the TCP layer, is called a four-layer load balancer. In general, load balancing is classified as follows:

- Layer 2 load balancing (MAC layer): responds based on MAC address
- Layer 3 load balancing (IP layer): responds based on IP address
- Layer 4 load balancing (TCP layer): responds based on IP address and port number
- Layer 7 load balancing (HTTP layer): on top of Layer 4, can further respond based on Layer 7 information such as the URL or browser type
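To make the distinction concrete, here is a minimal sketch of what the two levels look like in nginx configuration (the addresses and ports are placeholders; the stream module used for Layer 4 proxying requires nginx 1.9.0 or later):

# Layer 7: the http context proxies HTTP requests and can route on URL, headers, etc.
http {
    server {
        listen 80;
        location /api/ {
            proxy_pass http://127.0.0.1:7001;   # routing decision uses HTTP-level information
        }
    }
}

# Layer 4: the stream context forwards raw TCP connections without parsing HTTP
stream {
    server {
        listen 13306;
        proxy_pass 127.0.0.1:3306;              # forwards bytes; no URL or header awareness
    }
}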
Common software support

Which layer each common piece of software supports:

- nginx: seven-layer; a lightweight implementation that supports http and mail, with performance similar to haproxy
- haproxy: seven-layer; supports seven-layer load balancing
- LVS: four-layer; supports four-layer load balancing, with a heavier implementation
- F5: four-layer; a hardware implementation, at high cost

Common load balancing algorithms

Common load balancing algorithms include the following:

- Round Robin: supported by nginx (the default). Servers are polled in turn with equal weight. Suitable when external requests and the internal servers' capacities are relatively balanced.
- Weighted Round Robin: supported by nginx (weight). Polling with a configurable weight per server. Suitable when servers differ in processing capacity, or when you want to control traffic, for example in a canary release.
- Random: not built into nginx. Requests are assigned to servers at random. Suitable when both external requests and internal servers are very balanced, or when random distribution is explicitly wanted.
- Weighted Random: not built into nginx. Random assignment combined with per-server weights, which adapts the random strategy better to real-world conditions.
- Response Time: supported by nginx (fair, via a third-party module). Requests are assigned based on server response speed, which reflects both a server's capability and its current state, so a server that is already saturated is not handed still more work.
- Least Connection: supported by nginx (least_conn). Requests are assigned based on the number of active connections. Plain polling hands out tasks evenly but cannot control how quickly each task finishes, so the connection count is what actually reflects real server load. Suitable for services with long-lived connections, such as WebSocket-based online customer service, or services like FTP/SFTP.
- Fastest DNS response: not an nginx feature. The request proceeds with whichever DNS resolution result returns first, ignoring the IP addresses returned by other DNS servers. Suitable for global load balancing, for example a CDN.
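For reference, the nginx-supported algorithms above map onto upstream directives like the following (a minimal sketch; least_conn and weight are built in, while fair comes from the third-party nginx-upstream-fair module, so its availability depends on your build):

upstream backend_least_conn {
    least_conn;                            # choose the server with the fewest active connections
    server 192.168.163.117:7001;
    server 192.168.163.117:7002;
}

upstream backend_weighted {
    server 192.168.163.117:7001 weight=1;  # receives about 1/3 of the requests
    server 192.168.163.117:7002 weight=2;  # receives about 2/3 of the requests
}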

Load balancing demonstration example: Round Robin

Next, we use nginx to demonstrate ordinary round robin:


Preparation in advance

Start two services in advance on ports 7001 and 7002 that return different messages. For convenience, both are built from the same tornado-based image; the argument passed when each docker container starts differs, which makes it easy to tell the two services apart.

[root@kong ~]# docker run -d -p 7001:8080 liumiaocn/tornado:latest python /usr/local/bin/daemon.py "User Service 1: 7001"
ddba0abd24524d270a782c3fab907f6a35c0ce514eec3159357bded09022ee57
[root@kong ~]# docker run -d -p 7002:8080 liumiaocn/tornado:latest python /usr/local/bin/daemon.py "User Service 1: 7002"
95deadd795e19f675891bfcd44e5ea622c95615a95655d1fd346351eca707951
[root@kong ~]# curl http://192.168.163.117:7001
Hello, Service :User Service 1: 7001
[root@kong ~]# curl http://192.168.163.117:7002
Hello, Service :User Service 1: 7002
[root@kong ~]#

Start nginx

[root@kong ~]# docker run -p 9080:80 --name nginx-lb -d nginx
9d53c7e9a45ef93e7848eb3f4e51c2652a49681e83bda6337c89a3cf2f379c74
[root@kong ~]# docker ps |grep nginx-lb
9d53c7e9a45e        nginx               "nginx -g 'daemon ..."   11 seconds ago      Up 10 seconds       0.0.0.0:9080->80/tcp     nginx-lb
[root@kong ~]#

nginx configuration snippet

Prepare the following nginx configuration snippet and add it to the container's /etc/nginx/conf.d/default.conf. Note that default.conf is already included from within the http block of nginx.conf, so the snippet must not be wrapped in another http block:

upstream nginx_lb {
    server 192.168.163.117:7001;
    server 192.168.163.117:7002;
}

server {
    listen       80;
    server_name  www.liumiao.cn 192.168.163.117;

    location / {
        proxy_pass http://nginx_lb;
    }
}
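The upstream block above uses nginx's default round-robin strategy. For completeness, server entries also accept standard health-related parameters; a sketch with illustrative values:

upstream nginx_lb {
    server 192.168.163.117:7001 max_fails=3 fail_timeout=30s;  # taken out of rotation for 30s after 3 failures
    server 192.168.163.117:7002 backup;                        # only receives traffic when the others are down
}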

How to modify default.conf

You can edit the file by installing vim inside the container, by modifying a local copy and copying it back in with docker cp, or by changing it in place with sed. To install vim inside the container, do the following:

[root@kong ~]# docker exec -it nginx-lb sh
# apt-get update
...output omitted...
# apt-get install vim
...output omitted...
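If you prefer the docker cp route mentioned above, a sketch of that workflow (the path matches the nginx image used here):

[root@kong ~]# docker cp nginx-lb:/etc/nginx/conf.d/default.conf ./default.conf
[root@kong ~]# vi ./default.conf
[root@kong ~]# docker cp ./default.conf nginx-lb:/etc/nginx/conf.d/default.conf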

Before modification

# cat default.conf
server {
    listen       80;
    server_name  localhost;
    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
#

After modification

# cat default.conf
upstream nginx_lb {
    server 192.168.163.117:7001;
    server 192.168.163.117:7002;
}
server {
    listen       80;
    server_name  www.liumiao.cn 192.168.163.117;
    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        #root   /usr/share/nginx/html;
        #index  index.html index.htm;
        proxy_pass http://nginx_lb;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
#
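Before restarting, you can optionally check the edited file for syntax errors from the host (nginx -t only tests the configuration, it does not reload it):

[root@kong ~]# docker exec nginx-lb nginx -t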

Restart the nginx container

[root@kong ~]# docker restart nginx-lb
nginx-lb
[root@kong ~]#

Confirm the result

You can clearly see that requests are distributed in round-robin order:

[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7001
[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7002
[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7001
[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7002
[root@kong ~]#

Load balancing demonstration example: Weighted Round Robin

Building on the previous example, weighted round robin only requires adding a weight to each server.


Modify default.conf

Modify default.conf as follows

# cp default.conf default.conf.org
# vi default.conf
# diff default.conf default.conf.org
2,3c2,3
<     server 192.168.163.117:7001 weight=100;
<     server 192.168.163.117:7002 weight=200;
---
>     server 192.168.163.117:7001;
>     server 192.168.163.117:7002;
#

Restart nginx container

[root@kong ~]# docker restart nginx-lb
nginx-lb
[root@kong ~]#

Confirm the result

You can see that requests are now distributed in proportions of 1/3 and 2/3:

[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7001
[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7002
[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7002
[root@kong ~]#
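With weights of 100 and 200, roughly one third of the requests should land on port 7001 and two thirds on 7002. A quick way to verify the ratio over a larger sample (a sketch; seq, sort, and uniq are standard tools):

[root@kong ~]# for i in $(seq 1 30); do curl -s http://localhost:9080; echo; done | sort | uniq -c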

Related recommendations:

nginx management configuration optimization

Nginx reverse proxy configuration example for WebSocket

