Usage of ngx_shmem
ngx_shmem.c/h is a thin wrapper around the mmap()/munmap() system calls (or shmget()/shmdt() on System V platforms). It implements an nginx-style basic library that allocates and releases a contiguous region of shared memory. It is generally used for fixed-length shared data: the size is decided when the memory is allocated and does not grow or shrink during use.
typedef struct {
    u_char      *addr;
    size_t       size;
    ...
} ngx_shm_t;

ngx_int_t ngx_shm_alloc(ngx_shm_t *shm);
void ngx_shm_free(ngx_shm_t *shm);
In nginx, shared memory is generally created by the master process, and the worker processes obtain the pointer to it through inheritance across fork().
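The principle behind this inheritance is that a MAP_SHARED mapping created before fork() remains visible to both parent and child. The stand-alone sketch below (plain C, not nginx code) illustrates the idea; on platforms that support anonymous mappings, ngx_shm_alloc() performs essentially this kind of MAP_ANON|MAP_SHARED mmap() call.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    size_t  size = 4096;
    char   *addr;

    /* anonymous shared mapping: essentially what ngx_shm_alloc() does
     * on platforms that support MAP_ANON|MAP_SHARED */
    addr = mmap(NULL, size, PROT_READ|PROT_WRITE,
                MAP_ANON|MAP_SHARED, -1, 0);
    if (addr == MAP_FAILED) {
        return 1;
    }

    if (fork() == 0) {
        /* "worker": inherits the mapping and writes into it */
        strcpy(addr, "written by the child process");
        _exit(0);
    }

    wait(NULL);

    /* "master": sees the data written by the child */
    printf("%s\n", addr);

    munmap(addr, size);    /* ngx_shm_free() is the counterpart in nginx */
    return 0;
}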
For an example of ngx_shmem usage, see the fragments in ngx_event_module_init(). That code creates several counters in shared memory that record the number of requests in various states (accepted/reading/writing/...), and increments or decrements these counters at several key event entry points in ngx_event_module. Because all worker processes share the same counters, the result is a statistic of the current request state across the whole server.
    shm.size = size;
    ngx_str_set(&shm.name, "nginx_shared_zone");
    shm.log = cycle->log;

    if (ngx_shm_alloc(&shm) != NGX_OK) {
        return NGX_ERROR;
    }

    shared = shm.addr;
    ...
    ngx_stat_accepted = (ngx_atomic_t *) (shared + 3 * cl);
    ngx_stat_handled = (ngx_atomic_t *) (shared + 4 * cl);
    ngx_stat_requests = (ngx_atomic_t *) (shared + 5 * cl);
    ngx_stat_active = (ngx_atomic_t *) (shared + 6 * cl);
    ngx_stat_reading = (ngx_atomic_t *) (shared + 7 * cl);
    ngx_stat_writing = (ngx_atomic_t *) (shared + 8 * cl);
    ngx_stat_waiting = (ngx_atomic_t *) (shared + 9 * cl);
For more details, look at the code guarded by the NGX_STAT_STUB macro and at ngx_http_stub_status_module in the source tree.
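For instance, the accept path updates these counters with atomic operations. The fragment below is only a rough sketch of the NGX_STAT_STUB-guarded code in ngx_event_accept(); in the real source the increments appear at different points in the function, and matching decrements happen when the connection is closed.

#if (NGX_STAT_STUB)
    /* roughly as in ngx_event_accept(): count the accepted connection
     * and mark it as active */
    (void) ngx_atomic_fetch_add(ngx_stat_accepted, 1);
    (void) ngx_atomic_fetch_add(ngx_stat_active, 1);
#endif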
Usage of ngx_slab
ngx_shmem is a minimalist wrapper that implements only the basic functions of shared memory. In most scenarios, however, the shared data in our programs is not a fixed-size structure but a variable-size data structure such as ngx_array, ngx_list, ngx_queue, or ngx_rbtree.
What we want is a memory pool that can dynamically allocate and release space, like ngx_pool_t. ngx_slab is exactly such a structure. In principle it is similar to the system's malloc(): it uses a set of algorithms to allocate and release memory segments. The difference is that ngx_slab operates on shared memory obtained through ngx_shmem.
Let's first take a look at the interface of ngx_slab:
typedef struct {
    ngx_shmtx_t   mutex;
    ...
    void         *data;  /* usually holds the address of the root data allocated from the pool (the first piece of data allocated from the pool) */
    void         *addr;  /* base address of the shared memory allocated via ngx_shmem */
} ngx_slab_pool_t;

void ngx_slab_init(ngx_slab_pool_t *pool);
void *ngx_slab_alloc(ngx_slab_pool_t *pool, size_t size);
void *ngx_slab_alloc_locked(ngx_slab_pool_t *pool, size_t size);
void *ngx_slab_calloc(ngx_slab_pool_t *pool, size_t size);
void *ngx_slab_calloc_locked(ngx_slab_pool_t *pool, size_t size);
void ngx_slab_free(ngx_slab_pool_t *pool, void *p);
void ngx_slab_free_locked(ngx_slab_pool_t *pool, void *p);
As you can see, the interface is not complicated. The difference between alloc and calloc is whether the allocated memory segment is zeroed. The interfaces ending in _locked indicate that the caller has already acquired the pool's lock. The ngx_slab_pool_t structure contains an ngx_shmtx_t mutex, which synchronizes concurrent access when multiple processes operate on the pool at the same time. Note that ngx_slab_alloc() first acquires the lock, then allocates space, and finally releases the lock, whereas ngx_slab_alloc_locked() allocates directly, assuming the caller has already taken the lock elsewhere.
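As a hedged sketch of the two calling styles (shpool stands for an ngx_slab_pool_t pointer and my_node_t is a hypothetical type; neither name is taken from the nginx source):

/* style 1: let ngx_slab_alloc() take and release the mutex itself */
node = ngx_slab_alloc(shpool, sizeof(my_node_t));
if (node == NULL) {
    return NGX_ERROR;
}

/* style 2: hold the mutex yourself across several related operations */
ngx_shmtx_lock(&shpool->mutex);

node = ngx_slab_alloc_locked(shpool, sizeof(my_node_t));
if (node == NULL) {
    ngx_shmtx_unlock(&shpool->mutex);
    return NGX_ERROR;
}

/* ... insert node into a shared rbtree/queue under the same lock ... */

ngx_shmtx_unlock(&shpool->mutex);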
Using shared memory in nginx module development generally follows this initialization process:
1. During configuration parsing, the module calls ngx_shared_memory_add() to register a shared memory zone, supplying its size and an initialization callback.
2. In ngx_init_cycle(), the framework allocates the memory with ngx_shmem, initializes the ngx_slab pool, and then invokes the initialization callback registered by the module.
3. The module allocates and frees shared data through the ngx_slab interfaces.
In this process, the ngx_shared_memory_add() interface and the corresponding ngx_shm_zone_t structure are involved.
struct ngx_shm_zone_s {
    void                     *data;
    ngx_shm_t                 shm;
    ngx_shm_zone_init_pt      init;
    void                     *tag;
    void                     *sync;
    ngx_uint_t                noreuse;  /* unsigned  noreuse:1; */
};

ngx_shm_zone_t *ngx_shared_memory_add(ngx_conf_t *cf, ngx_str_t *name, size_t size, void *tag);
It is worth mentioning that the noreuse attribute controls whether the shared memory zone is re-allocated, rather than reused, when nginx is reloaded.
Since ngx_init_cycle() is a long function, the relevant code is easiest to find by searching for the /* create shared memory */ comment or for uses of the cycle->shared_memory object.
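To make the three-step flow above concrete, here is a hedged sketch of how an http module might register and initialize its own zone during configuration parsing. The names ngx_http_my_module, my_shm_init, my_data_t, my_conf_handler and the 1 MB size are illustrative placeholders, not part of the nginx API; the overall shape follows the pattern used by modules such as ngx_http_limit_req_module.

typedef struct {
    ngx_uint_t  counter;        /* example shared data */
} my_data_t;

static ngx_int_t
my_shm_init(ngx_shm_zone_t *shm_zone, void *data)
{
    ngx_slab_pool_t  *shpool;
    my_data_t        *d;

    if (data) {                 /* zone inherited from the old cycle on reload */
        shm_zone->data = data;
        return NGX_OK;
    }

    /* the slab pool lives at the start of the shared memory */
    shpool = (ngx_slab_pool_t *) shm_zone->shm.addr;

    d = ngx_slab_calloc(shpool, sizeof(my_data_t));
    if (d == NULL) {
        return NGX_ERROR;
    }

    shpool->data = d;           /* conventional place for the root pointer */
    shm_zone->data = d;

    return NGX_OK;
}

static char *
my_conf_handler(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    ngx_str_t        name = ngx_string("my_zone");
    ngx_shm_zone_t  *shm_zone;

    /* step 1: register the zone during configuration parsing;
     * the tag is typically the address of the module object */
    shm_zone = ngx_shared_memory_add(cf, &name, 1024 * 1024,
                                     &ngx_http_my_module);
    if (shm_zone == NULL) {
        return NGX_CONF_ERROR;
    }

    /* step 2 happens in ngx_init_cycle(): nginx allocates the memory,
     * sets up the slab pool and then calls this callback */
    shm_zone->init = my_shm_init;

    return NGX_CONF_OK;
}

After initialization, step 3 is simply calling the ngx_slab allocation and free interfaces (as sketched earlier) on the pool obtained from shm_zone->shm.addr.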