
nginx application: using nginx for load balancing

Jun 01, 2018 pm 03:15 PM

This article introduces how to use nginx for load balancing. It may serve as a useful reference for readers who need it.

nginx is generally used for seven-layer (application-layer) load balancing. This article covers some basic knowledge of load balancing and a simple example of using nginx for load balancing.

Four-layer load balancing vs. seven-layer load balancing

The terms "seven-layer load balancing" and "four-layer load balancing" come from the layer names of the ISO OSI network model. nginx is called a seven-layer load balancer because it performs load balancing at the application layer using the HTTP protocol; LVS, which performs load balancing at the TCP layer, is called a four-layer load balancer. In general, load balancing can be classified as follows:

| Category | OSI model layer | Description |
|---|---|---|
| Layer-2 load balancing | MAC layer | Responds based on MAC address |
| Layer-3 load balancing | IP layer | Responds based on IP address |
| Layer-4 load balancing | TCP layer | Responds based on IP address and port number |
| Layer-7 load balancing | HTTP layer | On top of layer 4, can further respond based on layer-7 information such as the URL or browser type |
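Although this article focuses on seven-layer balancing, newer nginx versions can also perform four-layer load balancing via the stream module (available since nginx 1.9.0, if built with stream support). A minimal sketch, reusing the backend addresses from this article's examples:

```nginx
# Sketch: four-layer (TCP) load balancing with the nginx stream module.
# The stream block sits at the same level as http in nginx.conf,
# not inside /etc/nginx/conf.d/default.conf.
stream {
    upstream tcp_backend {
        server 192.168.163.117:7001;
        server 192.168.163.117:7002;
    }
    server {
        listen 9090;            # accept raw TCP on port 9090
        proxy_pass tcp_backend; # forward at the TCP layer, no HTTP parsing
    }
}
```

Because this operates below HTTP, it works for any TCP protocol, at the cost of losing URL- or header-based routing.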
Common software support

| Software | Four-layer load balancing | Seven-layer load balancing | Notes |
|---|---|---|---|
| nginx | - | Supported | Lightweight implementation; supports http and mail; performance similar to haproxy |
| haproxy | - | Supported | |
| LVS | Supported | - | Heavier implementation |
| F5 | Supported | - | Hardware implementation; high cost |

Common load balancing algorithms

Common load balancing algorithms include the following:

| Load balancing algorithm | nginx support | Description | Applicable scenarios |
|---|---|---|---|
| Round Robin | Supported | Polling with equal weights | Scenarios where external service requests and internal servers are relatively balanced |
| Weighted Round Robin | Supported (weight) | Polling with per-server weights | Servers with different processing capacities, or traffic control such as a canary release |
| Random | - | Requests are assigned to servers at random | When external and internal are both well balanced, or random allocation is explicitly desired |
| Weighted Random | - | Random assignment combined with weights | A random strategy adjustable by weight, matching real-world distributions better |
| Response Time | Supported (fair, third-party module) | Assignment based on server response speed | Combines server capacity with its current state; dynamically adjusts so that a normally capable server that is currently overloaded is not assigned still more work |
| Least Connections | Supported (least_conn) | Assignment based on the number of active connections | Round-robin distributes tasks evenly but cannot account for how quickly they complete, so connection counts better reflect real server load; suitable for long-lived connection services, such as WebSocket-based online customer service, or FTP/SFTP |
| Fastest DNS | - | Continues the request with the fastest-returned DNS resolution result, ignoring the IP addresses returned by other DNS servers | Global load balancing situations, such as a CDN |
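For the algorithms nginx supports, switching strategies is a single directive inside the upstream block. A sketch, assuming the two example backends used later in this article:

```nginx
upstream nginx_lb {
    # Default with no directive: round robin.
    # Uncomment exactly one of the following to switch strategies:

    # least_conn;   # least connections (built into nginx)
    # ip_hash;      # pin each client IP to one backend (built into nginx)
    # fair;         # response-time based; requires the third-party
    #               # nginx-upstream-fair module to be compiled in

    server 192.168.163.117:7001 weight=1;
    server 192.168.163.117:7002 weight=2;  # weighted round robin via weight=
}
```

Weighted round robin needs no separate directive; it is enabled simply by giving servers different weight= values.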

Load balancing demonstration example: normal polling

Next, use nginx to demonstrate normal round-robin polling.

Preparation in advance

First, start two services on ports 7001 and 7002 that display different output. For convenience, the demo uses a tornado image, and the argument passed when each docker container starts is used to tell the two services apart.

[root@kong ~]# docker run -d -p 7001:8080 liumiaocn/tornado:latest python /usr/local/bin/daemon.py "User Service 1: 7001"
ddba0abd24524d270a782c3fab907f6a35c0ce514eec3159357bded09022ee57
[root@kong ~]# docker run -d -p 7002:8080 liumiaocn/tornado:latest python /usr/local/bin/daemon.py "User Service 1: 7002"
95deadd795e19f675891bfcd44e5ea622c95615a95655d1fd346351eca707951
[root@kong ~]# curl http://192.168.163.117:7001
Hello, Service :User Service 1: 7001
[root@kong ~]# curl http://192.168.163.117:7002
Hello, Service :User Service 1: 7002
[root@kong ~]#

Start nginx

[root@kong ~]# docker run -p 9080:80 --name nginx-lb -d nginx
9d53c7e9a45ef93e7848eb3f4e51c2652a49681e83bda6337c89a3cf2f379c74
[root@kong ~]# docker ps |grep nginx-lb
9d53c7e9a45e   nginx   "nginx -g 'daemon ..."   11 seconds ago   Up 10 seconds   0.0.0.0:9080->80/tcp   nginx-lb
[root@kong ~]#

nginx configuration snippet

Prepare the following nginx configuration snippet and add it to nginx's /etc/nginx/conf.d/default.conf. Since default.conf is already included inside the http context of nginx.conf, the upstream and server blocks go in directly, without an enclosing http block:

upstream nginx_lb {
    server 192.168.163.117:7001;
    server 192.168.163.117:7002;
}

server {
    listen       80;
    server_name  www.liumiao.cn 192.168.163.117;
    location / {
        proxy_pass http://nginx_lb;
    }
}

How to modify default.conf

You can install vim inside the container, edit a copy locally and pass it in with docker cp, or modify the file directly with sed. To install vim inside the container, use the following:

[root@kong ~]# docker exec -it nginx-lb sh
# apt-get update
...(output omitted)
# apt-get install vim
...(output omitted)

Before modification

# cat default.conf
server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

After modification

# cat default.conf
upstream nginx_lb {
    server 192.168.163.117:7001;
    server 192.168.163.117:7002;
}

server {
    listen       80;
    server_name  www.liumiao.cn 192.168.163.117;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        #root   /usr/share/nginx/html;
        #index  index.html index.htm;
        proxy_pass http://nginx_lb;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Restart the nginx container

[root@kong ~]# docker restart nginx-lb
nginx-lb
[root@kong ~]#

Confirm the result

You can clearly see that polling is performed in order:

[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7001
[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7002
[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7001
[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7002
[root@kong ~]#

Load balancing demonstration example: weighted polling

On this basis, weighted polling only requires adding weights:

| Load balancing algorithm | nginx support | Description | Applicable scenarios |
|---|---|---|---|
| Weighted Round Robin | Supported (weight) | Polling with per-server weights | Servers with different processing capacities, or traffic control such as a canary release |

Modify default.conf

Modify default.conf as follows

# cp default.conf default.conf.org
# vi default.conf
# diff default.conf default.conf.org
2,3c2,3
<     server 192.168.163.117:7001 weight=100;
<     server 192.168.163.117:7002 weight=200;
---
>     server 192.168.163.117:7001;
>     server 192.168.163.117:7002;
#
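As mentioned earlier, the same change can also be made without editing inside the container, by modifying a local copy with sed and passing it back in with docker cp. A minimal sketch, where the file content stands in for the upstream block from this article (the docker commands are shown commented out, since they assume the running nginx-lb container):

```shell
# Stand-in for: docker cp nginx-lb:/etc/nginx/conf.d/default.conf .
printf 'upstream nginx_lb {\n    server 192.168.163.117:7001;\n    server 192.168.163.117:7002;\n}\n' > default.conf

# Add the weights in place with sed (GNU sed -i syntax).
sed -i 's/7001;/7001 weight=100;/; s/7002;/7002 weight=200;/' default.conf

cat default.conf
# docker cp default.conf nginx-lb:/etc/nginx/conf.d/default.conf
# docker restart nginx-lb
```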

Restart nginx container

[root@kong ~]# docker restart nginx-lb
nginx-lb
[root@kong ~]#

Confirm the result

You can see that, with weights of 100 and 200, requests are distributed in proportions of 1/3 and 2/3:

[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7001
[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7002
[root@kong ~]# curl http://localhost:9080
Hello, Service :User Service 1: 7002
[root@kong ~]#
