
nginx basic concepts: connection

WBOY · Original · 2016-08-08 09:27:41 · 904 views
In nginx, a connection is an encapsulation of a TCP connection, covering the connected socket together with its read and write events. Using nginx's connection wrapper, we can conveniently handle connection-related work: establishing connections, sending and receiving data, and so on. HTTP request processing in nginx is built on top of connection, which is why nginx can act not only as a web server but also as a mail server; indeed, with the connection facility, nginx can talk to essentially any backend service.

Following the life cycle of a TCP connection, let's see how nginx handles one. First, when nginx starts, it parses the configuration file to obtain the IP addresses and ports it needs to listen on. Then, in the master process, it initializes the listening sockets (create the socket, set options such as SO_REUSEADDR, bind to the specified IP address and port, then listen) and forks multiple worker processes, which then compete to accept new connections. At this point a client can initiate a connection to nginx. When the client and server complete the three-way handshake, one nginx worker process accepts successfully, obtains the socket of the established connection, and creates nginx's encapsulation of it, the ngx_connection_t structure. Next, nginx sets up the read and write event handlers and registers read and write events to exchange data with the client. Finally, either nginx or the client actively closes the connection, and the connection's life comes to an end.

Of course, nginx can also act as a client and request data from other servers (as the upstream module does); the connections it creates to other servers are likewise encapsulated in ngx_connection_t. As a client, nginx first obtains an ngx_connection_t structure, then creates a socket and sets its attributes (such as non-blocking).
It then registers read and write events, calls connect/read/write to drive the connection, and finally closes the connection and releases the ngx_connection_t.

In nginx, each process has an upper limit on the number of connections, which is distinct from the system's limit on file descriptors. Through ulimit -n we can get the maximum number of fds a process may open, known as nofile. Since every socket connection occupies one fd, this also bounds the maximum number of connections our process can hold and, in turn, the maximum concurrency our program can support: once fds are exhausted, creating a socket fails. nginx sets the maximum number of connections each worker process supports via worker_connections; if this value is greater than nofile, the actual maximum is nofile, and nginx emits a warning.

In its implementation, nginx manages connections through a connection pool. Each worker process has an independent pool whose size is worker_connections. The pool does not hold real, live connections; it is simply an array of worker_connections ngx_connection_t structures. In addition, nginx keeps all the free ngx_connection_t entries on a linked list, free_connections: each time a connection is needed, one is taken from the free list, and once it is finished with, it is put back on the list.

Many people misunderstand the worker_connections parameter, taking it for the maximum number of connections nginx can establish in total. In fact, it is the maximum number of connections each worker process can establish, so the maximum for an nginx instance is worker_connections * worker_processes. Of course, what we are talking about here is the maximum number of connections.
For HTTP requests serving local resources, the maximum concurrency that can be supported is worker_connections * worker_processes. When nginx acts as an HTTP reverse proxy, however, the maximum concurrency is worker_connections * worker_processes / 2, because each proxied request occupies two connections: one with the client and one with the backend service.

Now, as mentioned before, once a client connects, multiple idle processes compete for the connection, and it is easy to see that this competition can be unfair. If one process gets more chances to accept, its free connections are quickly used up. Without some control in advance, when that process accepts a new TCP connection but cannot obtain a free ngx_connection_t (and the connection cannot be handed over to another process), the TCP connection goes unserved and is aborted. Clearly this is unfair: some processes have free connections but no chance to handle anything, while others are forced to discard connections because they have no free slots.

So how does nginx solve this problem? First, the accept_mutex option must be enabled; then only the process that has acquired accept_mutex registers the accept event. In other words, nginx controls whether a process registers the accept event at all. nginx uses a variable named ngx_accept_disabled to decide whether to compete for the accept_mutex lock. The first statement of the code shown below computes ngx_accept_disabled: one eighth of the total number of connections of a single nginx process, minus the number of remaining free connections. The resulting ngx_accept_disabled has a pattern.
It is greater than 0 only when fewer than one eighth of the total connections remain free, and the fewer the remaining connections, the larger it is. Looking at the rest of the code: when ngx_accept_disabled is greater than 0, the process does not attempt to acquire the accept_mutex lock and instead decrements ngx_accept_disabled by 1, so each time execution passes through here it decreases by 1 until it drops below 0. Not acquiring the accept_mutex lock amounts to giving up the chance to accept new connections. Evidently, the fewer free connections a process has, the larger ngx_accept_disabled becomes, so the more opportunities it yields, and the greater the chance other processes have to acquire the lock. By declining to accept, a process keeps its own connection intake in check while the connection pools of other processes get used; in this way, nginx balances connections among its processes.

    ngx_accept_disabled = ngx_cycle->connection_n / 8
                          - ngx_cycle->free_connection_n;

    if (ngx_accept_disabled > 0) {
        ngx_accept_disabled--;

    } else {
        if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
            return;
        }

        if (ngx_accept_mutex_held) {
            flags |= NGX_POST_EVENTS;

        } else {
            if (timer == NGX_TIMER_INFINITE
                || timer > ngx_accept_mutex_delay)
            {
                timer = ngx_accept_mutex_delay;
            }
        }
    }

The above has introduced nginx's basic concept of a connection: the ngx_connection_t encapsulation, the connection life cycle, per-worker connection limits and the connection pool, and how accept_mutex and ngx_accept_disabled balance connections among worker processes. I hope it is helpful to readers interested in nginx.
