What is Nginx connection?
connection
In Nginx, a connection is an encapsulation of a TCP connection: it wraps the connected socket together with its read and write events. Through this encapsulation, Nginx can conveniently handle connection-related work such as establishing connections and sending and receiving data.
HTTP request processing in Nginx is built on top of connection, which is why Nginx can serve not only as a web server but also as a mail server. And with the connection abstraction, Nginx can talk to any backend service.
Let's walk through how Nginx handles a connection, following the life cycle of a TCP connection.
First, when Nginx starts, it parses the configuration file to get the IP addresses and ports it needs to listen on. Then, in the Nginx master process, it initializes the listening sockets (creating each socket, setting options such as SO_REUSEADDR, binding to the specified IP address and port, and calling listen), and then forks multiple worker processes, which compete to accept new connections.
At this point, the client can initiate a connection to Nginx.
When the client and server establish a connection through the three-way handshake, one of Nginx's worker processes accepts it successfully, obtains the socket of the established connection, and creates Nginx's encapsulation of that connection: the ngx_connection_t structure.
Next, Nginx sets the read and write event handlers and registers read and write events to exchange data with the client. Finally, either Nginx or the client actively closes the connection, and the connection's life comes to an end.
Of course, Nginx can also act as a client and request data from other servers (as the upstream module does). The connections it creates to those servers are likewise wrapped in ngx_connection_t.
As a client, Nginx first obtains an ngx_connection_t structure, then creates a socket and sets its attributes (such as non-blocking mode). It then registers read and write events, calls connect/read/write on the socket, and finally closes the connection and releases the ngx_connection_t.
In Nginx, each process has an upper limit on the number of connections, which is distinct from the operating system's limit on file descriptors. The OS limit, shown by ulimit -n, is nofile: the maximum number of fds a process may open. Since every socket connection occupies one fd, this caps the process's maximum number of connections and therefore directly bounds the concurrency the program can support. Once the fds are used up, creating a new socket fails.
Nginx sets the maximum number of connections each worker process supports with worker_connections. If this value is greater than nofile, the actual maximum number of connections is nofile, and Nginx issues a warning.
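In nginx.conf this looks roughly like the fragment below. The numbers are illustrative only; worker_rlimit_nofile raises the per-worker fd limit so worker_connections can actually be reached:

```nginx
# Illustrative values only; tune for your workload.
worker_processes  4;

# Raise the per-worker fd limit so the connection cap is not
# silently clamped by nofile.
worker_rlimit_nofile  20000;

events {
    worker_connections  10240;
}
```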
Internally, Nginx manages connections through a connection pool. Each worker process has its own independent pool, whose size is worker_connections. What the pool holds is not real, live connections; it is simply an array of worker_connections ngx_connection_t structures.
In addition, Nginx keeps all free ngx_connection_t structures on a linked list, free_connections. Each time a connection is needed, one is taken from the free list; once it is no longer in use, it is put back on the list.
Many people misread the worker_connections parameter as the maximum number of connections that Nginx as a whole can establish. In fact, it is the maximum per worker process, so the maximum number of connections a single Nginx instance can establish is worker_connections * worker_processes.
Of course, that is the maximum number of connections. For HTTP requests serving local resources, the maximum supported concurrency is worker_connections * worker_processes. For HTTP as a reverse proxy, the maximum concurrency is worker_connections * worker_processes / 2, because a reverse proxy holds two connections per concurrent request: one to the client and one to the backend service.
We said earlier that once clients start connecting, multiple idle worker processes compete to accept them, and it is easy to see that this competition can be unfair. If one process gets more chances to accept, its idle connection slots are used up quickly. Without some control in advance, when that process accepts a new TCP connection but cannot obtain an idle slot, it also cannot hand the connection over to another process, so the TCP connection goes unserved and is terminated.
Clearly this is unfair: some processes have free slots but no chance to use them, while others are forced to discard connections because they have none left. So how does Nginx solve this problem?
First, the accept_mutex option must be enabled. With it on, only the process that holds accept_mutex registers the accept event; in other words, Nginx controls whether each process adds the accept event.
Nginx uses a variable called ngx_accept_disabled to control whether to compete for the accept_mutex lock.
The first statement in the code below computes ngx_accept_disabled: one eighth of the total connections of a single Nginx worker, minus the number of remaining free connections. The resulting value has a clear pattern: it becomes greater than 0 once the free connections fall below one eighth of the total, and the fewer free connections remain, the larger it gets.
The rest of the code shows that while ngx_accept_disabled is greater than 0, the worker does not try to acquire the accept_mutex lock and simply decrements ngx_accept_disabled by 1. So each time execution reaches this point, the value drops by 1, until it is no longer greater than 0.
Not acquiring the accept_mutex lock amounts to giving up the chance to take new connections. Obviously, the fewer free connections a worker has, the larger ngx_accept_disabled is and the more rounds it gives up, so the other processes have a better chance of acquiring the lock.
By not accepting, a worker throttles its own connection intake so that the connection pools of other processes get used instead. In this way, Nginx balances connections across its multiple worker processes.
    ngx_accept_disabled = ngx_cycle->connection_n / 8
                          - ngx_cycle->free_connection_n;

    if (ngx_accept_disabled > 0) {
        ngx_accept_disabled--;

    } else {
        if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
            return;
        }

        if (ngx_accept_mutex_held) {
            flags |= NGX_POST_EVENTS;

        } else {
            if (timer == NGX_TIMER_INFINITE
                || timer > ngx_accept_mutex_delay)
            {
                timer = ngx_accept_mutex_delay;
            }
        }
    }
That is our first look at connections. For now, it is enough to know what a connection is in Nginx; working with connections directly is a relatively advanced usage.
The above is the detailed content of What is Nginx connection?, from the PHP Chinese website.