The 1.9.1 release of NGINX introduces a new feature: support for the SO_REUSEPORT socket option, which is available in newer versions of many operating systems, including DragonFly BSD and Linux (kernel version 3.9 and later). This socket option allows multiple sockets to listen on the same combination of IP address and port. The kernel then load balances incoming connections across those sockets. (For NGINX Plus customers, this feature will appear in Release 7, due out later this year.)
The SO_REUSEPORT option has many potential practical applications. Other services can use it, for example, to implement rolling upgrades during execution with little effort (NGINX already supports rolling upgrades through other means). For NGINX, enabling this option can reduce lock contention in certain scenarios and improve performance.
As depicted in the figure below, when the SO_REUSEPORT option is not enabled, a single listening socket notifies the worker processes of incoming connections, and each worker process tries to take the connection.
When the SO_REUSEPORT option is enabled, there are instead multiple listening sockets for each IP address and port binding, and each worker process can be assigned one. The kernel determines which listening socket (and, by implication, which worker process) gets each connection. This reduces lock contention between worker processes competing to accept new connections (Translator's note: contention among worker processes requesting the mutex that guards a shared resource) and can improve performance on multicore systems. However, it also means that when a worker process is stalled by a blocking operation, the stall affects not only the connections the worker has already accepted, but also the connection requests the kernel has assigned to that worker since it became blocked.
Setting up shared sockets
To make the SO_REUSEPORT socket option work, include the new reuseport parameter directly in the listen directive for http or tcp (stream module) traffic, as in this example:
http {
    server {
        listen 80 reuseport;
    }
}
After the reuseport parameter is included, the accept_mutex parameter has no effect on the referenced socket, because the mutex is redundant with reuseport. It can still be worth setting accept_mutex for ports on which reuseport is not used.
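For instance, a configuration along these lines (a sketch; the port numbers are arbitrary) enables reuseport on one listen socket while leaving accept_mutex in effect for another:

```nginx
events {
    # governs listen sockets that do not use reuseport
    accept_mutex on;
}

http {
    server {
        # the kernel load balances this port across workers;
        # accept_mutex is ignored for this socket
        listen 80 reuseport;
    }

    server {
        # no reuseport: workers coordinate via accept_mutex
        listen 8080;
    }
}
```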
Benchmarking the performance of reuseport
I ran a benchmark tool against 4 NGINX worker processes on a 36-core AWS instance. To minimize network effects, both the client and NGINX ran on localhost, and NGINX returned an OK string instead of a file. I compared three NGINX configurations: the default (equivalent to accept_mutex on), accept_mutex off, and reuseport. As shown in the figure, reuseport serves two to three times as many requests per second as the others, while also reducing both latency and its standard deviation.
                  latency (ms)   latency stdev (ms)   cpu load
accept_mutex off  15.59          26.48                10
reuseport         12.35          3.15                 0.3
In these benchmarks, the rate of connection requests is high, but the requests require little processing. Other preliminary tests indicate that reuseport can also significantly improve performance when application traffic fits this profile. (The reuseport parameter is not available for the listen directive in the mail context, for example for email, because email traffic definitely does not match this profile.) We encourage you to test first rather than applying it wholesale.
The above is the detailed content of What is Socket segmentation in Nginx server. For more information, please follow other related articles on the PHP Chinese website!
