
Exploring how OpenResty and Nginx's shared memory areas use physical memory resources (RAM)


OpenResty and Nginx servers usually configure shared memory areas to store data shared among all of their worker processes. For example, the standard Nginx modules ngx_http_limit_req and ngx_http_limit_conn use shared memory areas to store state data, in order to limit the user request rate and request concurrency across all worker processes. OpenResty's ngx_lua module provides shared-memory-based data dictionary storage to user Lua code through lua_shared_dict.
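For reference, such shared memory areas are typically declared in the nginx.conf file with directives like the following (a minimal sketch; the zone names and sizes here are arbitrary and not part of this article's examples):

http {
    # ngx_http_limit_req: a 10 MB zone holding request-rate state, keyed by client address
    limit_req_zone $binary_remote_addr zone=req_zone:10m rate=1r/s;

    # ngx_http_limit_conn: a 10 MB zone holding concurrent-connection counters
    limit_conn_zone $binary_remote_addr zone=conn_zone:10m;

    # ngx_lua: a 10 MB shared dictionary reachable from Lua as ngx.shared.my_dict
    lua_shared_dict my_dict 10m;
}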

This article explores, through several simple and independent examples, how these shared memory areas use physical memory resources (or RAM). We also examine the impact of shared memory usage on system-level process memory metrics, such as the VSZ and RSS columns in the output of system tools like ps.

As with almost all the technical articles on this blog, we use our OpenResty XRay dynamic tracing product to perform in-depth analysis and visualization of the internals of unmodified OpenResty or Nginx servers and applications. Because OpenResty XRay is non-invasive and requires no changes to the target processes (no code injection and no special plugins or modules), the internal state of the target processes that we see through the OpenResty XRay analysis tools is exactly the same as when there is no observer at all.

We will use the ngx_lua module's lua_shared_dict in most of the examples, because this module can be programmed with custom Lua code. The behavior and issues demonstrated in these examples apply equally to the shared memory areas of all standard Nginx modules and third-party modules.

Slabs and memory pages

Nginx and its modules usually use the slab allocator in the Nginx core to manage the space within a shared memory area. This slab allocator is designed to allocate and free smaller blocks of memory inside a fixed-size memory region.

On top of slabs, the shared memory area introduces higher-level data structures, such as red-black trees and linked lists.

A slab may be as small as a few bytes, or large enough to span multiple memory pages.

The operating system manages the shared memory (or other types of memory) of the process in units of memory pages.

On x86_64 Linux systems, the default memory page size is usually 4 KB, but the exact size depends on the architecture and the configuration of the Linux kernel. For example, some AArch64 Linux systems have a memory page size as large as 64 KB.
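On a given Linux machine, the effective page size can be checked with a standard command, for example:

getconf PAGE_SIZE
# typically prints 4096 (i.e. 4 KB) on x86_64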

We will see the details of the shared memory areas of the OpenResty and Nginx processes at the memory page level and slab level respectively.

Allocated memory is not necessarily consumed

Unlike resources like hard disk space, physical memory (or RAM) is always a very precious resource.

Most modern operating systems implement an optimization technique called demand paging, which reduces the pressure that user applications put on RAM resources. Specifically, when you allocate a large block of memory, the operating system kernel defers the actual allocation of RAM resources (physical memory pages) until the data in those memory pages is actually used. For example, if a user process allocates 10 memory pages but only ever uses 3 of them, the operating system may map only those 3 memory pages to the RAM device.

The same behavior applies to the shared memory areas allocated in Nginx or OpenResty applications. A user can configure a huge shared memory area in the nginx.conf file, but may notice that the server occupies almost no additional memory right after it starts. After all, almost none of the shared memory pages are actually used when the server first starts.
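A quick way to observe this effect from the command line (assuming a Linux system with the procps ps tool) is to compare the VSZ and RSS columns of the worker processes, as we will also do in the next section:

ps -o pid,vsz,rss,cmd -C nginx
# VSZ counts the whole configured area immediately; RSS only grows as pages are actually touched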

Empty shared memory area

We take the following nginx.conf file as an example. This file allocates an empty shared memory area and never uses it:

master_process on;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    lua_shared_dict dogs 100m;

    server {
        listen 8080;

        location = /t {
            return 200 "hello world\n";
        }
    }
}

Here we configure a 100 MB shared memory area named dogs via the lua_shared_dict directive, and we configure 2 worker processes for this server. Please note that we never touch this dogs area anywhere in the configuration, so the area is empty.

You can start this server through the following command:

mkdir ~/work/
cd ~/work/
mkdir logs/ conf/
vim conf/nginx.conf  # paste the nginx.conf sample above here
/usr/local/openresty/nginx/sbin/nginx -p $PWD/

Then use the following command to check whether the nginx process is running:

$ ps aux|head -n1; ps aux|grep nginx
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
agentzh   9359  0.0  0.0 137508  1576 ?        Ss   09:10   0:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /home/agentzh/work/
agentzh   9360  0.0  0.0 137968  1924 ?        S    09:10   0:00 nginx: worker process
agentzh   9361  0.0  0.0 137968  1920 ?        S    09:10   0:00 nginx: worker process

The memory sizes occupied by these two worker processes are very close. Below we focus on the worker process with PID 9360. In the web console of OpenResty XRay, we can see that this process occupies a total of 134.73 MB of virtual memory and 1.88 MB of resident memory, which matches the ps output above exactly:

[Figure: OpenResty XRay memory overview for worker process 9360]

As introduced in our other article, "How OpenResty and Nginx Allocate and Manage Memory", what we care about most is the resident memory usage. Resident memory is the portion that actually has hardware resources (such as RAM) mapped to its memory pages. So we can see from the figure that the amount of memory actually mapped to hardware resources is very small, only 1.88 MB in total. The 100 MB shared memory area configured above accounts for only a small part of this resident memory (see the discussion below for details).

Of course, all 100 MB of the shared memory area contributes to the total virtual memory size of the process. The operating system reserves a virtual memory address space for this shared memory area, but that is just a bookkeeping record and does not occupy any RAM or other hardware resources at this point.

Empty does not mean nothing

We can check whether the empty shared memory area occupies any resident (or physical) memory through the "application-level memory usage breakdown" chart for the process.

[Figure: application-level memory usage breakdown for the worker process]

Interestingly, we see a non-zero Nginx Shm Loaded (loaded Nginx shared memory) component in this chart. This part is small, only 612 KB, but it shows up nonetheless. So the empty shared memory area is not truly empty. This is because Nginx places some metadata in every newly initialized shared memory area for bookkeeping purposes, and this metadata is used by Nginx's slab allocator.

Loaded and unloaded memory pages

Through the following chart, automatically generated by OpenResty XRay, we can see the number of memory pages actually used (or loaded) in the shared memory areas.

[Figure: loaded and unloaded memory pages in the shared memory areas]

We can see that the amount of memory loaded (or actually used) in the dogs area is 608 KB. In addition, there is a special ngx_accept_mutex_ptr area of 4 KB, which is automatically allocated by the Nginx core for the accept_mutex feature.

The combined size of these two parts of memory is 612 KB, which is exactly the size of Nginx Shm Loaded shown in the pie chart above.

As mentioned above, the 608 KB of memory used in the dogs area is actually the metadata used by the slab allocator.

The unloaded memory pages are just reserved virtual memory address space and have never been used.

About the process page tables

One complication we have not mentioned yet is that each nginx worker process actually has its own page table. The CPU hardware or the operating system kernel looks up these page tables to find the backing storage for a virtual memory page. So each process may have a different set of loaded pages in the same shared memory area, because each process may have accessed a different set of memory pages during its execution. To simplify the analysis here, OpenResty XRay displays all the memory pages that have been loaded by any worker process, even if the current target worker process has never touched those pages. For this reason, the total size of the loaded memory pages may be (slightly) larger than the resident memory size of the target process.

Free and used slabs

As mentioned above, Nginx usually uses slabs rather than memory pages to manage the space inside a shared memory area. We can view the statistics on the used and free (or unused) slabs inside a particular shared memory area directly through OpenResty XRay:

[Figure: used and free slab statistics for the dogs area]

As we expected, most of the slabs in this example are free, i.e. unused. Note that the memory sizes here are much smaller than the page-level statistics shown in the previous section. This is because slabs are a higher level of abstraction and do not include the memory that the slab allocator consumes for page-level size padding and address alignment.

We can further observe the size distribution of the individual slabs in this dogs area through OpenResty XRay:

[Figures: slab size distribution in the dogs area]

We can see that even in this empty shared memory area, there are still 3 used slabs and 157 free slabs, for a total of 3 + 157 = 160 slabs. Please keep this number in mind; we will compare it below with the dogs area after some user data has been written into it.

A shared memory area with user data written into it

Next, we modify the previous configuration example so that some data is written into the shared memory area when the Nginx server starts. Specifically, we add the following init_by_lua_block directive to the http {} configuration block of the nginx.conf file:

init_by_lua_block {
    for i = 1, 300000 do
        ngx.shared.dogs:set("key" .. i, i)
    end
}

Here, when the server starts up, we proactively initialize the dogs shared memory area by writing 300,000 key-value pairs into it.
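For illustration only, these values could be read back at request time with a handler like the following (a hypothetical location block that is not part of the original example):

location = /dogs {
    content_by_lua_block {
        -- read back one of the keys written by init_by_lua_block
        ngx.say("key1 = ", ngx.shared.dogs:get("key1"))
    }
}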

Then run the following shell commands to restart the server process:

kill -QUIT `cat logs/nginx.pid`
/usr/local/openresty/nginx/sbin/nginx -p $PWD/

The newly started Nginx processes look like this:

$ ps aux|head -n1; ps aux|grep nginx
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
agentzh  29733  0.0  0.0 137508  1420 ?        Ss   13:50   0:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /home/agentzh/work/
agentzh  29734 32.0  0.5 138544 41168 ?        S    13:50   0:00 nginx: worker process
agentzh  29735 32.0  0.5 138544 41044 ?        S    13:50   0:00 nginx: worker process

Virtual memory vs. resident memory

For the Nginx worker process 29735, OpenResty XRay generates the following pie chart:

[Figure: OpenResty XRay memory overview for worker process 29735]

Obviously, the resident memory size is much larger than in the previous empty-area example, and it also takes up a much larger share of the total virtual memory size (29.6%).

The virtual memory usage has also increased slightly (from 134.73 MB to 135.30 MB). Because the size of the shared memory area itself has not changed, the shared memory area contributes nothing to this increase in virtual memory usage. The slight increase comes from the Lua code we newly introduced through the init_by_lua_block directive (this small amount of memory also contributes to the resident memory).

The application-level memory usage breakdown shows that the loaded memory of the Nginx shared memory areas occupies the largest share of the resident memory:

[Figure: application-level memory usage breakdown for the worker process]

Loaded and unloaded memory pages

Now there are many more loaded memory pages in this dogs shared memory area, and the number of unloaded memory pages has dropped significantly:

[Figure: loaded and unloaded memory pages in the dogs area]

Free and used slabs

Now the dogs shared memory area has 300,000 more used slabs (on top of the 3 slabs that are always pre-allocated even in an empty shared memory area):

[Figure: used slab statistics for the dogs area]

Obviously, each key-value pair in a lua_shared_dict area corresponds directly to a single slab.

The number of free slabs is exactly the same as in the empty shared memory area before, i.e. 157 slabs:

[Figure: free slab statistics for the dogs area]

False memory leaks

As we demonstrated above, a shared memory area does not actually consume physical memory resources until its memory pages are actually accessed by the application. For this reason, users may observe that the resident memory size of an Nginx worker process seems to keep growing, especially right after the process has started. This can lead users to mistakenly believe that there is a memory leak. The picture below shows such an example:

[Figure: resident memory usage growth over time]
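A simple way to watch this growth from the command line (assuming the watch utility is available) looks like this:

# refresh the worker processes' RSS every second
watch -n 1 'ps -o pid,rss,cmd -C nginx'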

By looking at the application-level memory usage breakdown generated by OpenResty XRay, we can clearly see that the loaded Nginx shared memory areas actually occupy most of the resident memory:

[Figure: application-level memory usage breakdown for the worker process]

This kind of memory growth is temporary and stops once the shared memory area is full. But there is still a potential risk when the user configures a shared memory area so large that it exceeds the physical memory available on the current system. For this reason, we should keep an eye on the page-level memory usage histogram shown below:

[Figure: memory usage at the memory page level]

The blue portion in this chart may eventually be exhausted by the process (i.e. turn red), and then impact the whole system.

HUP reload

Nginx supports reloading the server configuration via the HUP signal without quitting its master process (the worker processes still exit gracefully and restart). Usually, the Nginx shared memory areas automatically inherit their original data after an HUP reload. Therefore, the physical memory pages previously allocated for the accessed shared memory pages are retained as well, and any attempt to release the resident memory occupied by shared memory areas through an HUP reload will fail. Users should use Nginx's restart or binary upgrade operation instead.
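For reference, an HUP reload can be triggered like this (assuming the same prefix directory as in the examples above):

kill -HUP `cat logs/nginx.pid`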

It is worth noting that individual Nginx modules still have the right to decide whether to retain the original data after an HUP reload, so there may be exceptions.

Conclusion

We explained above that the physical memory resources used by Nginx's shared memory areas can be far smaller than the size configured in the nginx.conf file, thanks to the demand-paging feature of modern operating systems. We also demonstrated that an empty shared memory area still uses some memory pages and slabs to store the metadata required by the slab allocator itself, and we examined these details through OpenResty XRay's advanced analyzers.

On the other hand, the demand-paging optimization also means that memory usage may keep growing for some period of time. This is not actually a memory leak, but it can still pose a risk. We also explained that Nginx's HUP reload operation usually does not clear the existing data in the shared memory areas.

