Layer-4 load balancing vs. layer-7 load balancing
The names layer-7 and layer-4 load balancing come from the layers of the ISO/OSI network model. nginx is usually called a layer-7 load balancer because it balances HTTP traffic at the application layer, while LVS, which balances at the TCP layer, is called a layer-4 load balancer. Broadly speaking, load balancing is classified by the layer at which it operates: layer 4 works on transport-layer connections (TCP/UDP), while layer 7 works on application-layer requests (HTTP and similar protocols).
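As a rough illustration (not part of the original demo), nginx itself can work at either layer: the http block below balances at layer 7, while a stream block balances raw TCP at layer 4 (this requires nginx to be built with the stream module). The backend addresses reuse the demo services used later in this article, and the listen port 9000 in the stream example is an arbitrary placeholder.

# layer-7 (HTTP) load balancing: nginx parses the HTTP request before choosing a backend
http {
    upstream web_backend {
        server 192.168.163.117:7001;
        server 192.168.163.117:7002;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://web_backend;
        }
    }
}

# layer-4 (TCP) load balancing: nginx only forwards the TCP connection (stream module)
stream {
    upstream tcp_backend {
        server 192.168.163.117:7001;
        server 192.168.163.117:7002;
    }
    server {
        listen 9000;
        proxy_pass tcp_backend;
    }
}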
Common software support
Typical layer-7 (HTTP) load balancers include nginx and HAProxy; LVS works at layer 4, as do HAProxy in TCP mode and hardware appliances such as F5.
Common load balancing algorithms
The common load balancing algorithms include: round robin, weighted round robin, least connections, source-IP hash, and random selection.
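As a rough sketch of how some of these algorithms are selected in nginx (least_conn, ip_hash and weight are standard nginx upstream directives; the backend addresses are the demo services used below):

upstream nginx_lb {
    # round robin is the default when no directive is given
    # least_conn;   # least connections: pick the server with the fewest active connections
    # ip_hash;      # source-IP hash: requests from the same client IP stick to one server
    server 192.168.163.117:7001 weight=1;   # weights turn the default into weighted round robin
    server 192.168.163.117:7002 weight=2;
}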
Load balancing demonstration example: Round robin
Next, let's use nginx to demonstrate plain (default) round robin:
Preparation in advance
Beforehand, start two services on ports 7001 and 7002 that return different messages. For convenience of demonstration, I built a Docker image based on tornado; the two containers are started with different arguments so that each service returns a different message.
[root@kong ~]# docker run -d -p 7001:8080 liumiaocn/tornado:latest python /usr/local/bin/daemon.py "user service 1: 7001"
ddba0abd24524d270a782c3fab907f6a35c0ce514eec3159357bded09022ee57
[root@kong ~]# docker run -d -p 7002:8080 liumiaocn/tornado:latest python /usr/local/bin/daemon.py "user service 1: 7002"
95deadd795e19f675891bfcd44e5ea622c95615a95655d1fd346351eca707951
[root@kong ~]#
[root@kong ~]# curl http://192.168.163.117:7001
hello, service :user service 1: 7001
[root@kong ~]#
[root@kong ~]# curl http://192.168.163.117:7002
hello, service :user service 1: 7002
[root@kong ~]#
Start nginx
[root@kong ~]# docker run -p 9080:80 --name nginx-lb -d nginx
9d53c7e9a45ef93e7848eb3f4e51c2652a49681e83bda6337c89a3cf2f379c74
[root@kong ~]# docker ps |grep nginx-lb
9d53c7e9a45e nginx "nginx -g 'daemon ..." 11 seconds ago up 10 seconds 0.0.0.0:9080->80/tcp nginx-lb
[root@kong ~]#
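As an optional sanity check (the IP and the 9080->80 mapping come from the commands above), the container should already answer before any upstream is configured:

curl http://192.168.163.117:9080
# the response should be nginx's default "Welcome to nginx!" page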
nginx configuration snippet
Prepare the following nginx configuration snippet; it goes into the nginx configuration under /etc/nginx (in the stock image, the server configuration lives in /etc/nginx/conf.d/default.conf):
http {
    upstream nginx_lb {
        server 192.168.163.117:7001;
        server 192.168.163.117:7002;
    }
    server {
        listen       80;
        server_name  www.liumiao.cn 192.168.163.117;
        location / {
            proxy_pass http://nginx_lb;
        }
    }
}
How to modify default.conf (/etc/nginx/conf.d/default.conf)
You can achieve this by installing vim inside the container, by editing the file locally and copying it back in with docker cp, or by modifying it directly with sed. If you install vim inside the container, proceed as follows:
[root@kong ~]# docker exec -it nginx-lb sh
# apt-get update
... (output omitted)
# apt-get install vim
... (output omitted)
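Alternatively, the docker cp route mentioned above could look roughly like this (the path assumes the stock nginx image, where the file lives at /etc/nginx/conf.d/default.conf):

# copy the file out of the container, edit it on the host, then copy it back
docker cp nginx-lb:/etc/nginx/conf.d/default.conf ./default.conf
vi ./default.conf
docker cp ./default.conf nginx-lb:/etc/nginx/conf.d/default.conf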
Before modification
# cat default.conf
server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
#
After modification
# cat default.conf
upstream nginx_lb {
    server 192.168.163.117:7001;
    server 192.168.163.117:7002;
}

server {
    listen       80;
    server_name  www.liumiao.cn 192.168.163.117;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        #root   /usr/share/nginx/html;
        #index  index.html index.htm;
        proxy_pass http://nginx_lb;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
#
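Before restarting, the edited configuration can be validated from outside the container; nginx -t only tests the configuration and reports syntax errors without affecting the running server:

docker exec nginx-lb nginx -t
# on success nginx reports that the syntax is ok and the test is successful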
Restart nginx container
[root@kong ~]# docker restart nginx-lb
nginx-lb
[root@kong ~]#
Confirm the result
You can clearly see that requests are served in round-robin order:
[root@kong ~]# curl
hello, service :user service 1: 7001
[root@kong ~]# curl
hello, service :user service 1: 7002
[root@kong ~]# curl
hello, service :user service 1: 7001
[root@kong ~]# curl
hello, service :user service 1: 7002
[root@kong ~]#
Load balancing demonstration example: Weighted round robin
On this basis, you only need to add weights to the upstream servers to get weighted round robin.
Modify default.conf
Modify default.conf as follows
# cp default.conf default.conf.org
# vi default.conf
# diff default.conf default.conf.org
2,3c2,3
<     server 192.168.163.117:7001 weight=100;
<     server 192.168.163.117:7002 weight=200;
---
>     server 192.168.163.117:7001;
>     server 192.168.163.117:7002;
#
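Put together, the upstream block after this change reads as follows (reconstructed from the diff above):

upstream nginx_lb {
    server 192.168.163.117:7001 weight=100;
    server 192.168.163.117:7002 weight=200;
}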
Restart nginx container
[root@kong ~]# docker restart nginx-lb
nginx-lb
[root@kong ~]#
Confirm the results
You can see that requests are distributed in a 1:2 ratio (about 1/3 to 7001 and 2/3 to 7002):
[root@kong ~]# curl
hello, service :user service 1: 7001
[root@kong ~]# curl
hello, service :user service 1: 7002
[root@kong ~]# curl
hello, service :user service 1: 7002
[root@kong ~]#
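To see the 1:2 split more clearly, a small loop helps; the URL here is an assumption based on the 9080->80 port mapping used when the nginx container was started:

for i in $(seq 1 6); do curl http://192.168.163.117:9080; done
# with weight=100/200, roughly a third of the responses should come from 7001 and two thirds from 7002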
The above is the detailed content of How to use nginx for load balancing. For more information, please follow other related articles on the PHP Chinese website!

