Introduction to the main functions of nginx:
1. Reverse proxy
Reverse proxying is probably the most common thing Nginx is used for. What is a reverse proxy? Here is how Baidu Encyclopedia describes it: a reverse proxy (Reverse Proxy) means using a proxy server to accept connection requests from the Internet, forward those requests to a server on the internal network, and return the result obtained from that server to the client on the Internet that made the request. In this setup, the proxy server appears to the outside world as a reverse proxy server.
Put simply, the real server cannot be reached directly from the external network, so a proxy server is needed; the proxy server can be reached from the external network and sits in the same network environment as the real server. Of course, they can also be the same machine, just on different ports.
Below is a simple piece of configuration that implements a reverse proxy:
server {
    listen 80;
    server_name localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host:$server_port;
    }
}
Save the configuration file and start Nginx, so that when we access localhost, it is equivalent to accessing localhost:8080.
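In practice the back end often also wants to know the real client address rather than the proxy's. The following is only a sketch of the same location block with the commonly used forwarding headers added; everything except the two extra proxy_set_header lines is unchanged from the configuration above:

location / {
    proxy_pass http://localhost:8080;
    proxy_set_header Host            $host:$server_port;
    # pass the real client address on to the back end
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}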
2. Load balancing
Load balancing is another commonly used Nginx feature. Load balancing means distributing work across multiple operating units, such as web servers, FTP servers, key enterprise application servers, and other mission-critical servers, so that they complete the work together.
Put simply, when there are two or more servers, requests are distributed to the designated servers according to configured rules. Load-balancing configuration generally requires a reverse proxy to be configured at the same time; requests pass through the reverse proxy and are then load balanced across the back ends. Nginx currently supports 3 built-in load-balancing strategies, plus 2 commonly used third-party strategies.
1. RR (default)
Each request is assigned to a different back-end server in turn, in chronological order (round-robin); if a back-end server goes down, it is automatically removed. A simple configuration:

upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen 81;
    server_name localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://test;
        proxy_set_header Host $host:$server_port;
    }
}
Two servers are configured here. Of course, in practice they are the same machine with different ports, and the server on 8081 does not exist, i.e. it cannot be reached at all. Yet nothing goes wrong when we access http://localhost:81: Nginx falls back to http://localhost:8080 by default, because Nginx automatically checks the status of each server.
If a server is inaccessible (the server is down), requests will not be forwarded to it, so a single dead server does not affect the site. Since RR is Nginx's default policy, no additional settings are required.
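If you want finer control over how a failed back end is detected and skipped, the server directive in an upstream block also accepts the max_fails and fail_timeout parameters, and backup marks a standby machine. The following is only an illustrative sketch; the extra 8082 server and the numbers are placeholders, not part of the configuration above:

upstream test {
    # after 3 failed attempts within 30s, take the server out of rotation for 30s
    server localhost:8080 max_fails=3 fail_timeout=30s;
    server localhost:8081 max_fails=3 fail_timeout=30s;
    # a hypothetical standby, used only when all the servers above are unavailable
    server localhost:8082 backup;
}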
2. Weight
Specifies the polling weight. The weight is proportional to the share of requests a server receives; this is used when the back-end servers have uneven performance.
For example:
upstream test {
    server localhost:8080 weight=9;
    server localhost:8081 weight=1;
}
Then, on average, only 1 out of every 10 requests will hit 8081, while the other 9 hit 8080.
3. ip_hash
The two methods above share a problem: the next request may be distributed to a different server. When our application is not stateless (for example, it keeps data in the session), this becomes a real issue: if login information is stored in the session, the user has to log in again whenever a request lands on another server. So in many cases we need each client to always reach the same server, and that is what ip_hash is for.
With ip_hash, each request is allocated according to the hash of the client's IP address, so each visitor consistently reaches the same back-end server, which solves the session problem.
upstream test {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}
4. fair (third party)
Requests are allocated according to the response time of the backend server, and those with short response times are allocated first.
upstream backend {
    fair;
    server localhost:8080;
    server localhost:8081;
}
5. url_hash (third party)
Requests are distributed according to the hash of the requested URL, so that each URL is directed to the same back-end server; this is most effective when the back-end servers use caching. Add the hash statement inside the upstream block; other parameters such as weight cannot then be written in the server statements. hash_method specifies the hash algorithm to use.
upstream backend {
    hash $request_uri;
    hash_method crc32;
    server localhost:8080;
    server localhost:8081;
}
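As a side note, reasonably recent Nginx versions (1.7.2 and later) ship a built-in hash directive in the upstream module that covers the same URL-hashing use case without any third-party module. A minimal sketch, with the same placeholder ports as above:

upstream backend {
    # built-in URI hashing; "consistent" switches to ketama consistent hashing
    hash $request_uri consistent;
    server localhost:8080;
    server localhost:8081;
}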
The five load-balancing methods above suit different situations, so choose the strategy that matches your actual needs. Note, however, that fair and url_hash require third-party modules to be installed; since this article focuses on what Nginx can do, installing third-party modules is not covered here.
3. HTTP Server
Nginx itself is also a static resource server. When there are only static resources, Nginx can be used as the web server directly. Separating dynamic and static content is also very popular nowadays, and it can be implemented with Nginx as well. First, let's look at Nginx as a static resource server.
server {
    listen 80;
    server_name localhost;
    client_max_body_size 1024M;

    location / {
        root e:\wwwroot;
        index index.html;
    }
}
This way, accessing http://localhost serves the index.html under the wwwroot directory on drive E by default. If a website consists only of static pages, it can be deployed like this.
Separation of dynamic and static
Dynamic-static separation means splitting a dynamic website's resources into those that rarely change and those that change frequently, according to certain rules. Once dynamic and static resources are split apart, we can cache the static resources according to their characteristics; this is the core idea behind static-content handling for websites.
upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen 80;
    server_name localhost;

    location / {
        root e:\wwwroot;
        index index.html;
    }

    # all static requests are handled by nginx itself, from e:\wwwroot
    location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
        root e:\wwwroot;
    }

    # all dynamic requests are forwarded to tomcat
    location ~ \.(jsp|do)$ {
        proxy_pass http://test;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root e:\wwwroot;
    }
}
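Since the whole point of splitting out static resources is that they can be cached, the static location above is a natural place to add caching headers. The following is only a sketch under the same e:\wwwroot assumption; the 30-day expiry is a placeholder you would tune yourself:

# let browsers cache static assets; expires adds Expires/Cache-Control headers
location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
    root    e:\wwwroot;
    expires 30d;        # static assets rarely change, so cache them for 30 days
    access_log off;     # optional: skip access logging for static hits
}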
This way we can put the HTML, images, CSS and JS under the wwwroot directory, while Tomcat only handles the .jsp and .do requests. For example, when a request ends with .gif, Nginx will by default fetch the requested image file from wwwroot and return it. Of course, here the static files sit on the same server as Nginx; we could also put them on another machine and route to it through the reverse proxy and load-balancing configuration. Once you understand the basic flow, a lot of the configuration becomes simple. Also, what follows location is actually a regular expression, which makes it very flexible.
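Since location matching is so central here, it may help to see the common matching modifiers side by side. The following is only an illustrative sketch; the /health path and its response are made-up examples, while the jsp/do and image patterns mirror the configuration above:

# exact match: only the URI /health matches (made-up example)
location = /health {
    return 200 "ok";
}

# prefix match that skips regex checking once it matches
location ^~ /static/ {
    root e:\wwwroot;
}

# case-sensitive regular expression, as used above for dynamic requests
location ~ \.(jsp|do)$ {
    proxy_pass http://test;
}

# case-insensitive regular expression
location ~* \.(gif|jpg|png)$ {
    root e:\wwwroot;
}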
4. Forward proxy
A forward proxy is a server that sits between the client and the origin server. To fetch content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and returns the content it obtains to the client. A client has to be configured to go through a forward proxy before it can use one.
When you need to use your own server as a proxy server, Nginx can implement a forward proxy. However, Nginx currently has one problem here: it does not support HTTPS. Although I have found (via Baidu) configurations for an HTTPS forward proxy, in the end it still could not proxy HTTPS traffic; of course, it may simply be that my configuration was wrong.
resolver 114.114.114.114 8.8.8.8;

server {
    resolver_timeout 5s;
    listen 81;

    access_log e:\wwwroot\proxy.access.log;
    error_log e:\wwwroot\proxy.error.log;

    location / {
        proxy_pass http://$host$request_uri;
    }
}
resolver configures the DNS servers the forward proxy uses, and listen is the port the forward proxy listens on. Once this is configured, you can set server IP + port as the proxy in IE or in other proxy plug-ins.
Note: Nginx supports hot reloading, which means that after modifying the configuration file you can make the changes take effect without shutting Nginx down. The command that makes Nginx re-read its configuration is: nginx -s reload.