Can the Cache in Linux memory really be recycled?
In Linux systems, we often use the free command to check how system memory is being used. On a RHEL6 system, the output of free looks roughly like this:
[root@tencent64 ~]# free
             total       used       free     shared    buffers     cached
Mem:     132256952   72571772   59685180          0    1762632   53034704
-/+ buffers/cache:   17774436  114482516
Swap:      2101192        508    2100684
The default display unit is KB. My server has 128 GB of memory, so the numbers look fairly large. Almost everyone who has used Linux knows this command, yet the more ubiquitous a command is, the smaller the proportion of people who seem to truly understand it. Broadly, understanding of this command's output falls into the following levels:

1. Not understanding it. The typical first reaction: "Good heavens, more than 70 GB of memory used, and I am hardly running anything big. Why does Linux eat so much memory?"

2. Believing they understand it well. Such people conclude after a glance: "Only about 17 GB is really in use. The large buffers/cached portion just means files have been read and written at some point, and that memory counts as free."

3. Really understanding it. Their reaction sounds the least expert of all: "That is what free shows, fine. Is the memory enough? I cannot possibly tell from this alone; it depends on what the applications are doing."
Judging by the technical material currently circulating on the Internet, the vast majority of people who know a little about Linux sit at the second level: it is generally believed that when memory pressure is high, the space occupied by buffers and cached can be released as free memory. But is that really so? Before demonstrating the answer, let us briefly review what buffers and cached mean:
Buffer and cache are two of the most overloaded terms in computing, with different meanings in different contexts. In Linux memory management, buffer refers to the buffer cache and cache refers to the page cache. Historically, one of them (buffer) served as a write cache for IO devices and the other (cache) as a read cache, where IO devices mainly meant block device files and regular files on a file system.

Today their meanings have changed. In the current kernel, the page cache is, as its name suggests, a cache for memory pages: any memory that is allocated and managed in units of pages can use the page cache as its cache. Not all memory is managed in pages, though; some is managed in blocks, and when that memory needs caching, the buffer cache is used. (From this perspective, would buffer cache be better renamed block cache?) Note that blocks are not of one fixed length: block size depends mainly on the block device in use, whereas a page is 4 KB on x86, whether 32-bit or 64-bit.
Once you understand the difference between these two cache systems, you can see what each can be used for.
The page cache is used mainly to cache file data on a file system, especially when a process reads from or writes to files. If you think about it, it is natural that mmap, the system call that maps files into memory, would also use the page cache. In the current implementation, the page cache also serves as the caching layer for other file types, so in practice it is responsible for caching most block device files as well.
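To make this concrete, the following minimal sketch (the path /etc/services is just a convenient example file) maps a regular file and touches its pages; the pages it faults in are exactly the kind of memory that free reports under cached:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    int fd = open("/etc/services", O_RDONLY);  /* any regular file will do */

    if (fd < 0 || fstat(fd, &st) < 0) {
        perror("open/fstat");
        exit(1);
    }

    /* A file-backed mapping: reads are served through the page cache. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }

    /* Touching one byte per page faults those pages into the page cache
       (if they are not cached already). */
    long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)
        sum += p[i];

    printf("checksum: %ld\n", sum);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}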
The buffer cache, on the other hand, is designed for the system to cache block data when it reads from and writes to block devices. This means certain block-level operations are cached in the buffer cache, for example when we format a file system. In general, the two cache systems work together: when we write to a file, the contents of the page cache change, and the buffer cache can be used to mark the page as consisting of several buffers and to record which buffer was modified. That way, when the kernel later writes back dirty data (writeback), it need not write back the whole page, only the modified portion.
When memory is about to be exhausted, the Linux kernel triggers memory reclaim in order to free memory for processes that urgently need it. In general, most of the memory freed by this operation comes from releasing buffer/cache, especially the cache, which tends to occupy more space. Since the cache exists mainly to speed up file reads and writes while memory is plentiful, it naturally makes sense, when memory pressure is high, to empty it and hand the space to processes that need it as free memory. So, in general, the belief that buffer/cache space can be released is correct.
But this cache clearing is not free of cost. Once you understand what the cache does, you will see that the data in the cache must be consistent with the data in the corresponding files before the cache can be released. Consequently, a spike in system IO usually accompanies cache dropping: the kernel must check whether the cached data matches the data on disk, write back anything inconsistent, and only then reclaim the memory.
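To illustrate the clean-before-reclaim requirement, here is a hedged sketch (./datafile is a hypothetical path) that first writes back one file's dirty pages with fsync() and then asks the kernel to drop that file's cached pages with posix_fadvise(POSIX_FADV_DONTNEED), a per-file counterpart of the global drop_caches switch discussed next:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int ret;
    int fd = open("./datafile", O_RDWR);  /* hypothetical example file */

    if (fd < 0) {
        perror("open");
        exit(1);
    }

    /* Write dirty pages back first: only clean pages can simply be dropped. */
    if (fsync(fd) < 0) {
        perror("fsync");
        exit(1);
    }

    /* Advise the kernel that the file's cached pages are no longer needed;
       offset 0 with length 0 covers the whole file. posix_fadvise returns
       an error number instead of setting errno. */
    ret = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    if (ret != 0) {
        fprintf(stderr, "posix_fadvise: %s\n", strerror(ret));
        exit(1);
    }

    close(fd);
    return 0;
}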
Besides the cache being cleared when memory is about to run out, we can also trigger cache dropping manually through the following file:
[root@tencent64 ~]# cat /proc/sys/vm/drop_caches
1
The method is:
echo 1 > /proc/sys/vm/drop_caches
The values that can be written to this file are 1, 2 and 3. Their meanings are as follows: echo 1 > /proc/sys/vm/drop_caches: drop the pagecache.
echo 2 > /proc/sys/vm/drop_caches: drop the reclaimable objects in the slab allocator, including the dentry cache and the inode cache. The slab allocator is one of the kernel's memory-management mechanisms, and much of the cached data in it is implemented on top of the pagecache.
echo 3 > /proc/sys/vm/drop_caches: drop both the pagecache and the cached objects in the slab allocator.
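What echo does here can equally be done from a program. A minimal sketch (it assumes root privileges, since /proc/sys/vm/drop_caches is root-writable) that calls sync() first so dirty pages are written back before the drop:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Flush dirty pages first: drop_caches only releases clean pages. */
    sync();

    int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
    if (fd < 0) {
        perror("open");   /* typically requires root */
        exit(1);
    }

    /* Same effect as: echo 3 > /proc/sys/vm/drop_caches */
    if (write(fd, "3", 1) != 1) {
        perror("write");
        exit(1);
    }

    close(fd);
    return 0;
}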
We have analysed the cases in which the cache can be reclaimed; are there caches that cannot be reclaimed? Of course there are. Let's look at the first case:
As you may know, Linux provides a "temporary" file system called tmpfs, which carves out part of memory to be used as a file system, so that memory space can be used through directories and files. Nowadays almost all Linux systems have a tmpfs mount called /dev/shm, which is exactly such a thing. Of course, we can also create a tmpfs of our own by hand, as follows:
[root@tencent64 ~]# mkdir /tmp/tmpfs
[root@tencent64 ~]# mount -t tmpfs -o size=20G none /tmp/tmpfs/
[root@tencent64 ~]# df
Filesystem           1K-blocks      Used  Available Use% Mounted on
/dev/sda1             10325000   3529604    6270916  37% /
/dev/sda3             20646064   9595940   10001360  49% /usr/local
/dev/mapper/vg-data  103212320  26244284   71725156  27% /data
tmpfs                 66128476  14709004   51419472  23% /dev/shm
none                  20971520         0   20971520   0% /tmp/tmpfs
We have thus created a new tmpfs of 20 GB, and we can create files of up to 20 GB inside /tmp/tmpfs. If the files we create actually occupy memory, which part of memory should that data be counted against? Given what the pagecache does, it stands to reason that, being a file system of sorts, tmpfs should be managed in pagecache space. Let's test whether that is the case:
[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         36         89          0          1         19
-/+ buffers/cache:          15        111
Swap:            2          0          2
[root@tencent64 ~]# dd if=/dev/zero of=/tmp/tmpfs/testfile bs=1G count=13
13+0 records in
13+0 records out
13958643712 bytes (14 GB) copied, 9.49858 s, 1.5 GB/s
[root@tencent64 ~]#
[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         49         76          0          1         32
-/+ buffers/cache:          15        110
Swap:            2          0          2
We created a 13 GB file in the tmpfs directory, and comparing free before and after shows that cached grew by 13 GB, which means the file really is held in memory and the kernel stores it in the cache. Now look at the line we care about, -/+ buffers/cache. In this situation free still tells us there are 110 GB of memory available. But is there really that much? We can trigger memory reclaim by hand and see how much can actually be reclaimed right now:
[root@tencent64 ~]# echo 3 > /proc/sys/vm/drop_caches
[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         43         82          0          0         29
-/+ buffers/cache:          14        111
Swap:            2          0          2
As you can see, the space occupied by cached was not fully released as we might have imagined: 13 GB of it is still occupied by the file in /tmp/tmpfs. (My system also has other non-releasable cache holding the remaining 16 GB.) So when is the cache space occupied by tmpfs released? When its files are deleted. If the files are not deleted, then no matter how close memory gets to exhaustion, the kernel will never delete files in tmpfs for you in order to free that cache.
[root@tencent64 ~]# rm /tmp/tmpfs/testfile
[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         30         95          0          0         16
-/+ buffers/cache:          14        111
Swap:            2          0          2
That is the first case we have analysed in which the cache cannot be reclaimed. There are other cases, for example:
Shared memory is a common inter-process communication (IPC) mechanism the system provides, but it cannot be requested and used from the shell, so we need a small test program, with code as follows:
[root@tencent64 ~]# cat shm.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

#define MEMSIZE 2048*1024*1023

int main()
{
    int shmid;
    char *ptr;
    pid_t pid;
    struct shmid_ds buf;
    int ret;

    /* Create a private shared memory segment of just under 2 GB. */
    shmid = shmget(IPC_PRIVATE, MEMSIZE, 0600);
    if (shmid < 0) {
        perror("shmget()");
        exit(1);
    }

    ret = shmctl(shmid, IPC_STAT, &buf);
    if (ret < 0) {
        perror("shmctl()");
        exit(1);
    }
    printf("shmid: %d\n", shmid);
    printf("shmsize: %d\n", buf.shm_segsz);

    /* Try to double the segment size via IPC_SET (the kernel ignores
       shm_segsz here, so only the local copy changes; %d below prints
       the low 32 bits, hence the negative value in the output). */
    buf.shm_segsz *= 2;
    ret = shmctl(shmid, IPC_SET, &buf);
    if (ret < 0) {
        perror("shmctl()");
        exit(1);
    }
    ret = shmctl(shmid, IPC_SET, &buf);
    if (ret < 0) {
        perror("shmctl()");
        exit(1);
    }
    printf("shmid: %d\n", shmid);
    printf("shmsize: %d\n", buf.shm_segsz);

    pid = fork();
    if (pid < 0) {
        perror("fork()");
        exit(1);
    }
    if (pid == 0) {
        /* Child: attach the segment and initialize it. */
        ptr = shmat(shmid, NULL, 0);
        if (ptr == (void *)-1) {
            perror("shmat()");
            exit(1);
        }
        bzero(ptr, MEMSIZE);
        strcpy(ptr, "Hello!");
        exit(0);
    } else {
        /* Parent: wait for the child, then print the contents. */
        wait(NULL);
        ptr = shmat(shmid, NULL, 0);
        if (ptr == (void *)-1) {
            perror("shmat()");
            exit(1);
        }
        puts(ptr);
        exit(0);
    }
}
The program is simple: it requests a shared memory segment a little under 2 GB, then forks a child that initializes the segment; the parent waits for the child to finish initializing, prints the contents of the shared memory, and exits, without deleting the shared memory. Let's look at memory usage before and after this program runs:
[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         30         95          0          0         16
-/+ buffers/cache:          14        111
Swap:            2          0          2
[root@tencent64 ~]# ./shm
shmid: 294918
shmsize: 2145386496
shmid: 294918
shmsize: -4194304
Hello!
[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         32         93          0          0         18
-/+ buffers/cache:          14        111
Swap:            2          0          2

The cached space grew from 16 GB to 18 GB. Can this cache be reclaimed? Let's keep testing:

[root@tencent64 ~]# echo 3 > /proc/sys/vm/drop_caches
[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         32         93          0          0         18
-/+ buffers/cache:          14        111
Swap:            2          0          2
The result: still not reclaimable. You can observe that this shared memory, even when nobody is using it, stays in the cache long-term until it is deleted. There are two ways to delete it: call shmctl() with IPC_RMID from a program, or use the ipcrm command. Let's try deleting it:
[root@tencent64 ~]# ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00005feb 0          root       666        12000      4
0x00005fe7 32769      root       666        524288     2
0x00005fe8 65538      root       666        2097152    2
0x00038c0e 131075     root       777        2072       1
0x00038c14 163844     root       777        5603392    0
0x00038c09 196613     root       777        221248     0
0x00000000 294918     root       600        2145386496 0

[root@tencent64 ~]# ipcrm -m 294918
[root@tencent64 ~]# ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00005feb 0          root       666        12000      4
0x00005fe7 32769      root       666        524288     2
0x00005fe8 65538      root       666        2097152    2
0x00038c0e 131075     root       777        2072       1
0x00038c14 163844     root       777        5603392    0
0x00038c09 196613     root       777        221248     0

[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         30         95          0          0         16
-/+ buffers/cache:          14        111
Swap:            2          0          2
After the shared memory is deleted, the cache is released normally. This behaviour is analogous to tmpfs: under the hood, the kernel implements the in-memory storage of the XSI IPC mechanisms, shared memory (shm), message queues (msg) and semaphore arrays (sem), on top of tmpfs, which is why shared memory behaves like tmpfs files. Of course, shm is usually the one that occupies the most memory, which is why we have focused on shared memory here.
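As mentioned above, deletion can also be done programmatically with shmctl() and IPC_RMID; a minimal sketch that takes the shmid as a command-line argument:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <shmid>\n", argv[0]);
        exit(1);
    }

    /* Mark the segment for deletion; it is destroyed once the last
       process detaches (equivalent to: ipcrm -m <shmid>). */
    if (shmctl(atoi(argv[1]), IPC_RMID, NULL) < 0) {
        perror("shmctl(IPC_RMID)");
        exit(1);
    }
    return 0;
}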
Speaking of shared memory, Linux also provides another way to obtain it: mmap(). mmap() is an extremely important system call, although you would never guess that from mmap's own description. On its face, mmap maps a file into a process's virtual address space, after which the file's contents can be manipulated by reading and writing memory. In reality the call is used far more widely than that. When malloc requests memory, the kernel handles small allocations via sbrk and large ones via mmap. When the exec family of calls runs, since that essentially means loading an executable file into memory for execution, the kernel quite naturally handles it with mmap as well.
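As an aside, you can observe the sbrk/mmap split yourself. In this hedged sketch, the crossover point is glibc's default M_MMAP_THRESHOLD of 128 KB, which is an allocator implementation detail rather than a kernel rule:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char *small = malloc(4 * 1024);           /* below the threshold: served from the sbrk heap */
    char *large = malloc(256 * 1024 * 1024);  /* above it: served by an anonymous mmap */

    printf("small: %p\nlarge: %p\n", (void *)small, (void *)large);
    printf("inspect /proc/%d/maps while this sleeps\n", getpid());

    /* Pause so the mappings can be inspected: the small block sits inside
       [heap], the large one appears as a separate anonymous mapping. */
    sleep(100);

    free(large);
    free(small);
    return 0;
}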
Here we consider just one case: when mmap is used to request shared memory, does it also use the cache, as shmget() does? As before, we need a simple test program:
[root@tencent64 ~]# cat mmap.c
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>

#define MEMSIZE 1024*1024*1023*2
#define MPFILE "./mmapfile"

int main()
{
    void *ptr;
    int fd;

    fd = open(MPFILE, O_RDWR);
    if (fd < 0) {
        perror("open()");
        exit(1);
    }

    /* Request a 2 GB shared mapping (with MAP_ANON the fd is ignored). */
    ptr = mmap(NULL, MEMSIZE, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANON, fd, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap()");
        exit(1);
    }
    printf("%p\n", ptr);

    /* Touch every page, then linger so memory usage can be observed. */
    bzero(ptr, MEMSIZE);
    sleep(100);

    munmap(ptr, MEMSIZE);
    close(fd);
    exit(1);
}
This time we dispense with the parent/child arrangement altogether: a single process requests a 2 GB mmap shared mapping, initializes the space, sleeps for 100 seconds, and then unmaps it. So during those 100 seconds of sleep we can check the system's memory usage and see which kind of space it occupies. Of course, before that we must create the 2 GB file ./mmapfile. The result is as follows:
[root@tencent64 ~]# dd if=/dev/zero of=mmapfile bs=1G count=2
[root@tencent64 ~]# echo 3 > /proc/sys/vm/drop_caches
[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         30         95          0          0         16
-/+ buffers/cache:          14        111
Swap:            2          0          2
Then we run the test program:
[root@tencent64 ~]# ./mmap &
[1] 19157
0x7f1ae3635000
[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         32         93          0          0         18
-/+ buffers/cache:          14        111
Swap:            2          0          2
[root@tencent64 ~]# echo 3 > /proc/sys/vm/drop_caches
[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         32         93          0          0         18
-/+ buffers/cache:          14        111
Swap:            2          0          2
We can see that for the duration of the program, cached stays at 18 GB, 2 GB higher than before, and that this cache still cannot be reclaimed. We then wait 100 seconds for the program to finish.
[root@tencent64 ~]#
[1]+  Exit 1                  ./mmap
[root@tencent64 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           126         30         95          0          0         16
-/+ buffers/cache:          14        111
Swap:            2          0          2
After the program exits, the space occupied by cached is released. This shows that memory requested with mmap and the MAP_SHARED flag is also stored by the kernel in the cache, and that this cache cannot be released while the process still holds the mapping. In fact, MAP_SHARED memory requested via mmap is likewise implemented by tmpfs inside the kernel. From this we can further infer that, since the read-only portions of shared libraries are managed in memory as file-backed mmap mappings, they too occupy cache and cannot be released while they remain mapped.
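If you want to look at those library mappings for yourself, here is a minimal sketch that dumps the calling process's memory map; every file-backed segment it lists, such as libc's code, occupies page cache while it remains mapped (see proc(5) for the line format):

#include <stdio.h>

int main(void)
{
    /* Print this process's mappings; file-backed segments show their
       backing path in the last column. */
    FILE *f = fopen("/proc/self/maps", "r");
    char line[512];

    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof(line), f) != NULL)
        fputs(line, stdout);
    fclose(f);
    return 0;
}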
Through three test cases we have found that the cache in Linux system memory cannot, in all circumstances, be released and used as free space. It is also clear that even when the cache can be released, doing so is not cost-free for the system. To summarise, we should remember these points:

1. Releasing the cache used for files drives IO up; that is the price the cache pays for speeding up file access.
2. Files stored in tmpfs occupy cache space, and that cache is not released automatically unless the files are deleted.
3. Shared memory requested with shmget occupies cache space, and it is not released automatically unless the segment is removed with ipcrm or with shmctl() and IPC_RMID.
4. Memory requested with mmap and the MAP_SHARED flag occupies cache space, and it is not released automatically unless the process munmaps it.
5. In fact, shmget and mmap shared memory are both implemented via tmpfs at the kernel level, and the storage behind tmpfs is all cache.
With that understood, I hope everyone's understanding of the free command can reach the third level we mentioned. Memory usage is not a simple concept, and cache really cannot all be treated as free space. If we want a truly deep assessment of whether the memory on a system is being used reasonably, we need to know much more of the underlying detail and make finer-grained judgments about how the relevant applications are implemented. Our experimental scenario here was a CentOS 6 environment; the free behaviour of other Linux versions may differ, and you can work out the reasons for the differences yourself.
Of course, this article does not describe every situation in which the cache cannot be released. In your own application scenarios, when have you seen cache that could not be released?