
nginx source code study notes (21) - event module 2 - event-driven core ngx_process_events_and_timers

WBOY (original), 2016-07-29

First, recall that the previous section on the worker's execution flow left one function uncovered: ngx_process_events_and_timers. Today we study this function.

This article comes from: http://blog.csdn.net/lengzijian/article/details/7601730

First, let's take another look at the flowchart screenshot from Section 19:

[Figure: flowchart reproduced from Section 19; the event-driven core ngx_process_events_and_timers is highlighted in red]

Today we mainly explain this event-driven function, the part marked in red in the figure:


src/event/ngx_event.c

void
ngx_process_events_and_timers(ngx_cycle_t *cycle)
{
    ngx_uint_t  flags;
    ngx_msec_t  timer, delta;

    if (ngx_timer_resolution) {
        timer = NGX_TIMER_INFINITE;
        flags = 0;

    } else {
        timer = ngx_event_find_timer();
        flags = NGX_UPDATE_TIME;
    }

    /*
     * ngx_use_accept_mutex indicates whether the accept mutex is used.
     * It is on by default and can be turned off with "accept_mutex off;".
     */
    if (ngx_use_accept_mutex) {

        /*
         * ngx_accept_disabled is calculated in ngx_event_accept().
         * If it is greater than 0, this process has accepted too many
         * connections, so it gives up one chance to compete for the accept
         * mutex, decrements the counter, and keeps processing events on its
         * existing connections. nginx uses this to achieve basic load
         * balancing of new connections across worker processes.
         */
        if (ngx_accept_disabled > 0) {
            ngx_accept_disabled--;

        } else {
            /*
             * Try to acquire the accept mutex. Only the process that gets
             * the lock puts the listening sockets into epoll. This ensures
             * that only one process watches the listening sockets, so the
             * workers blocked in epoll_wait do not suffer from the
             * "thundering herd" problem.
             *
             * If the lock is acquired, the NGX_POST_EVENTS flag is set.
             * Its effect is to put all events produced by the wait into a
             * queue and process them only after the lock has been released.
             * Handling them while still holding the lock could take a long
             * time, the process would occupy the lock too long, and
             * efficiency would drop.
             */
            if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
                return;
            }

            if (ngx_accept_mutex_held) {
                flags |= NGX_POST_EVENTS;

            } else {
                /*
                 * A process that did not get the lock does not need the
                 * NGX_POST_EVENTS flag, but it must cap the wait time so
                 * that it competes for the lock again soon.
                 */
                if (timer == NGX_TIMER_INFINITE
                    || timer > ngx_accept_mutex_delay)
                {
                    timer = ngx_accept_mutex_delay;
                }
            }
        }
    }

    delta = ngx_current_msec;

    /*
     * Now wait for events. For the epoll module the concrete implementation
     * of ngx_process_events is ngx_epoll_process_events(), which will be
     * explained in detail later.
     */
    (void) ngx_process_events(cycle, timer, flags);

    /* measure how long this wait took */
    delta = ngx_current_msec - delta;

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "timer delta: %M", delta);

    /*
     * ngx_posted_accept_events is an event queue that temporarily stores
     * the accept events returned by epoll on the listening sockets.
     * With the NGX_POST_EVENTS flag mentioned above, all accept events are
     * first queued here.
     */
    if (ngx_posted_accept_events) {
        ngx_event_process_posted(cycle, &ngx_posted_accept_events);
    }

    /* after all accept events are processed, release the lock if it is held */
    if (ngx_accept_mutex_held) {
        ngx_shmtx_unlock(&ngx_accept_mutex);
    }

    /*
     * delta is the time measured above. If at least one millisecond has
     * passed, walk the timer rbtree, remove the timers that have expired
     * and call the handler of each corresponding event.
     */
    if (delta) {
        ngx_event_expire_timers();
    }

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "posted events %p", ngx_posted_events);

    /*
     * Process ordinary events (read/write events on established
     * connections); each event has its own handler method.
     */
    if (ngx_posted_events) {
        if (ngx_threaded) {
            ngx_wakeup_worker_thread(cycle);

        } else {
            ngx_event_process_posted(cycle, &ngx_posted_events);
        }
    }
}
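Before moving on to the accept handler, the NGX_POST_EVENTS idea can be illustrated outside of nginx. Below is a minimal, self-contained C sketch (not nginx code; names such as post_event and process_posted are made up for illustration): while a try-locked mutex is held, handlers are only queued, and the queue is drained after the lock is released, so the lock is held as briefly as possible.

#include <stdio.h>
#include <pthread.h>

#define MAX_POSTED  16

typedef void (*handler_pt)(int fd);

static handler_pt       posted_handler[MAX_POSTED];
static int              posted_fd[MAX_POSTED];
static int              nposted;
static pthread_mutex_t  accept_mutex = PTHREAD_MUTEX_INITIALIZER;

/* queue a handler instead of running it (the NGX_POST_EVENTS idea) */
static void post_event(handler_pt h, int fd)
{
    if (nposted < MAX_POSTED) {
        posted_handler[nposted] = h;
        posted_fd[nposted] = fd;
        nposted++;
    }
}

/* drain the queue after the lock has been released */
static void process_posted(void)
{
    int  i;

    for (i = 0; i < nposted; i++) {
        posted_handler[i](posted_fd[i]);
    }
    nposted = 0;
}

static void read_handler(int fd)
{
    printf("handling read event on fd %d\n", fd);
}

int main(void)
{
    int  held = (pthread_mutex_trylock(&accept_mutex) == 0);

    if (held) {
        /* while the lock is held, events are only queued, not handled */
        post_event(read_handler, 3);
        post_event(read_handler, 4);

        pthread_mutex_unlock(&accept_mutex);   /* give the lock back early */
    }

    process_posted();   /* the potentially slow work runs without the lock */

    return 0;
}

nginx follows the same pattern with its ngx_posted_accept_events and ngx_posted_events queues, except that the lock is the shared-memory accept mutex and the events come from epoll.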
The accept event was mentioned earlier; it simply indicates that new connections are pending on the listening socket. Here is the handler method of the accept event:
  78. ngx_event_accept:
src/event/ngx_event_accept.c

void
ngx_event_accept(ngx_event_t *ev)
{
    socklen_t          socklen;
    ngx_err_t          err;
    ngx_log_t         *log;
    ngx_socket_t       s;
    ngx_event_t       *rev, *wev;
    ngx_listening_t   *ls;
    ngx_connection_t  *c, *lc;
    ngx_event_conf_t  *ecf;
    u_char             sa[NGX_SOCKADDRLEN];

    /* some code omitted */

    lc = ev->data;
    ls = lc->listening;
    ev->ready = 0;

    ngx_log_debug2(NGX_LOG_DEBUG_EVENT, ev->log, 0,
                   "accept on %V, ready: %d", &ls->addr_text, ev->available);

    do {
        socklen = NGX_SOCKADDRLEN;

        /* accept a new connection */
        s = accept(lc->fd, (struct sockaddr *) sa, &socklen);

        /* some code omitted */

        /*
         * After a new connection is accepted, ngx_accept_disabled is
         * recalculated. As mentioned above, it drives the basic load
         * balancing between worker processes.
         *
         * Here we can see how it is computed:
         * "one eighth of the total number of connections minus the number
         * of free connections". The total is the maximum number of
         * connections configured for each process. Once more than 7/8 of
         * them are in use, ngx_accept_disabled becomes greater than zero
         * and the connection load is considered too high.
         */
        ngx_accept_disabled = ngx_cycle->connection_n / 8
                              - ngx_cycle->free_connection_n;

        c = ngx_get_connection(s, ev->log);

        /* the pool is released only when the connection is closed */
        c->pool = ngx_create_pool(ls->pool_size, ev->log);
        if (c->pool == NULL) {
            ngx_close_accepted_connection(c);
            return;
        }

        c->sockaddr = ngx_palloc(c->pool, socklen);
        if (c->sockaddr == NULL) {
            ngx_close_accepted_connection(c);
            return;
        }

        ngx_memcpy(c->sockaddr, sa, socklen);

        log = ngx_palloc(c->pool, sizeof(ngx_log_t));
        if (log == NULL) {
            ngx_close_accepted_connection(c);
            return;
        }

        /* set a blocking mode for aio and non-blocking mode for others */

        if (ngx_inherited_nonblocking) {
            if (ngx_event_flags & NGX_USE_AIO_EVENT) {
                if (ngx_blocking(s) == -1) {
                    ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_socket_errno,
                                  ngx_blocking_n " failed");
                    ngx_close_accepted_connection(c);
                    return;
                }
            }

        } else {
            /* we use the epoll model, so the connection is set to non-blocking here */
            if (!(ngx_event_flags & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT))) {
                if (ngx_nonblocking(s) == -1) {
                    ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_socket_errno,
                                  ngx_nonblocking_n " failed");
                    ngx_close_accepted_connection(c);
                    return;
                }
            }
        }

        *log = ls->log;

        /* initialize the new connection */
        c->recv = ngx_recv;
        c->send = ngx_send;
        c->recv_chain = ngx_recv_chain;
        c->send_chain = ngx_send_chain;

        c->log = log;
        c->pool->log = log;

        c->socklen = socklen;
        c->listening = ls;
        c->local_sockaddr = ls->sockaddr;

        c->unexpected_eof = 1;

#if (NGX_HAVE_UNIX_DOMAIN)
        if (c->sockaddr->sa_family == AF_UNIX) {
            c->tcp_nopush = NGX_TCP_NOPUSH_DISABLED;
            c->tcp_nodelay = NGX_TCP_NODELAY_DISABLED;
#if (NGX_SOLARIS)
            /* Solaris's sendfilev() supports AF_NCA, AF_INET, and AF_INET6 */
            c->sendfile = 0;
#endif
        }
#endif

        rev = c->read;
        wev = c->write;

        wev->ready = 1;

        if (ngx_event_flags & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT)) {
            /* rtsig, aio, iocp */
            rev->ready = 1;
        }

        if (ev->deferred_accept) {
            rev->ready = 1;
#if (NGX_HAVE_KQUEUE)
            rev->available = 1;
#endif
        }

        rev->log = log;
        wev->log = log;

        /*
         * TODO: MT: - ngx_atomic_fetch_add()
         *             or protection by critical section or light mutex
         *
         * TODO: MP: - allocated in a shared memory
         *           - ngx_atomic_fetch_add()
         *             or protection by critical section or light mutex
         */

        c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1);

        if (ngx_add_conn && (ngx_event_flags & NGX_USE_EPOLL_EVENT) == 0) {
            if (ngx_add_conn(c) == NGX_ERROR) {
                ngx_close_accepted_connection(c);
                return;
            }
        }

        log->data = NULL;
        log->handler = NULL;

        /*
         * The listening socket's handler is important here: it finishes the
         * initialization of the new connection and puts the accepted
         * connection into epoll. The function attached to this handler is
         * ngx_http_init_connection, which will be covered in detail later
         * in the http module.
         */
        ls->handler(c);

        if (ngx_event_flags & NGX_USE_KQUEUE_EVENT) {
            ev->available--;
        }

    } while (ev->available);
}

That is all there is to the accept event's handler method. Next come the read/write event handlers of each connection, which will lead us straight into the http module. But there is no hurry; we still need to study nginx's classic epoll module first.

