
How to limit memory usage by docker containers

Feb 02, 2021


Docker is an open-source application container engine that lets developers package an application together with its dependencies into a portable container in a unified way and then ship it to any server with the Docker engine installed (including popular Linux and Windows machines); it can also be used for virtualization. Containers are fully sandboxed and have no interfaces to one another (much like iPhone apps). They incur almost no performance overhead and can easily be run on individual machines or in data centers. Most importantly, they do not depend on any particular language, framework, or operating system.

By default, the resources a container uses are unrestricted: it can use as much as the host's kernel scheduler allows. In practice, however, we often need to restrict the host resources a container may consume. This article explains how to limit the amount of host memory a container can use.

Why limit a container's memory usage?

It is very important to keep containers from using too much of the host's memory. On a Linux host, once the kernel detects that there is not enough memory left to allocate, it throws an OOME (Out Of Memory Exception) and starts killing processes to free up memory. The bad news is that any process can become prey for the kernel, including the docker daemon and other important programs. Even more dangerous: if a critical process that keeps the system running gets killed, the whole system goes down! Consider a fairly common scenario: a large number of containers eat up all the host's memory, the OOME is triggered, and the kernel immediately starts killing processes to free memory. What if the first process the kernel kills is the docker daemon? Then there is no way to manage the running containers any more, and that is unacceptable!
Docker tries to mitigate this problem by adjusting the OOM priority of the docker daemon. When picking a process to kill, the kernel scores all processes and kills the highest-scoring one first, then the next. Once the docker daemon's OOM priority has been lowered (note that the OOM priority of container processes is not adjusted), the daemon's score is not only lower than the container processes' scores, but also lower than the scores of quite a few other processes. That makes the docker daemon process much safer.
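One quick way to see this on a host where the daemon process is named dockerd (a hedged check; the exact value depends on the Docker version and how the daemon was started) is to read the daemon's oom_score_adj:

$ cat /proc/$(pidof dockerd)/oom_score_adj
# typically a negative value (e.g. -500), which lowers the daemon's oom_score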
The following script gives a quick look at the scores of all processes on the current system:

#!/bin/bash
# For every PID under /proc, print its oom_score, its PID, and the first
# 50 characters of its command line, then sort by score and keep the top 40.
for proc in $(find /proc -maxdepth 1 -regex '/proc/[0-9]+'); do
    printf "%2d %5d %s\n" \
        "$(cat $proc/oom_score)" \
        "$(basename $proc)" \
        "$(cat $proc/cmdline | tr '\0' ' ' | head -c 50)"
done 2>/dev/null | sort -nr | head -n 40

The script prints the 40 highest-scoring processes, sorted by score:

(Screenshot: sorted oom_score output for the top 40 processes)

The first column is the process's score, and mysqld takes first place. The entries shown as node server.js are all container processes, and they generally rank near the top. The docker daemon process sits very far down the list, even behind sshd.

With this mechanism in place, can we sit back and relax? No. Docker's official documentation keeps stressing that this is only a mitigation, and it offers some suggestions for reducing the risk:

  • Learn, through testing, how much memory your application needs

  • Make sure the host running the containers has plenty of memory

  • Limit the amount of memory the containers can use

  • Configure swap on the host

After all that rambling, the point is simply this: by capping the memory a container can use, we lower the various risks that come with exhausting the host's memory.

The stress testing tool stress

To test container memory usage, I installed the stress testing tool stress in an ubuntu image and built a new image named u-stress. Every container in this article is created from the u-stress image (the host running the containers is CentOS 7). Here is the Dockerfile for the u-stress image:

FROM ubuntu:latest

RUN apt-get update && \
        apt-get install -y stress

The image is built with:

$ docker build -t u-stress:latest .

Limiting the memory upper bound

Before getting into the fiddly configuration details, let's complete a simple use case: limit the container to at most 300M of memory.
The -m (--memory=) option does exactly that:

$ docker run -it -m 300M --memory-swap -1 --name con1 u-stress /bin/bash

The following stress command creates a process that allocates memory via malloc:

# stress --vm 1 --vm-bytes 500M

Check what actually happens with the docker stats command:

(Screenshot: docker stats output for the container)
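For reference, a single snapshot of the container's resource usage can be taken like this (con1 is the name given in the docker run command above; --no-stream makes docker stats print once and exit):

$ docker stats --no-stream con1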

In the docker run command above, the -m option caps the container's memory at 300M. At the same time, --memory-swap is set to -1, meaning the container's memory usage is limited but its swap usage is not (the container can use as much swap as the host has).
Now let's use the top command to look at the actual memory usage of the stress process:

(Screenshot: top output for the stress worker process)

In that screenshot, pgrep is first used to find the processes belonging to the stress command; the one with the larger PID is the worker that consumes memory, so that is the process whose memory we inspect. VIRT is the size of the process's virtual memory, so it should be 500M. RES is the amount of physical memory actually allocated, and we can see it hovering around 300M. It looks like we have successfully limited the amount of physical memory the container can use.
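A rough sketch of the commands behind that screenshot (run on the host; the actual PIDs will of course differ):

$ pgrep stress          # two PIDs: the stress parent and its worker (usually the larger PID)
$ top -p <worker_pid>   # watch VIRT (about 500M) and RES (hovering around 300M)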

Limiting the available swap size

Note that --memory-swap must be used together with --memory.

Normally, the value of --memory-swap covers both the memory and the swap available to the container. So --memory="300m" --memory-swap="1g" means:
the container can use 300M of physical memory plus 700M (1G - 300M) of swap. Yes, --memory-swap is the sum of the physical memory and the swap the container may use!

Setting --memory-swap to 0 is the same as leaving it unset. In that case, if --memory is set, the combined memory-plus-swap limit defaults to twice the --memory value, so the container can use an amount of swap equal to the --memory setting.
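As an example of that default, the sketch below reuses the u-stress image and sets only -m; the container then gets 300M of memory plus up to 300M of swap, 600M in total:

$ docker run -it --rm -m 300M u-stress /bin/bash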

If --memory-swap has the same value as --memory, the container cannot use swap at all. The following demo shows a process asking the system for a large amount of memory when no swap is available:

$ docker run -it --rm -m 300M --memory-swap=300M u-stress /bin/bash
# stress --vm 1 --vm-bytes 500M

(Screenshot: the stress worker is killed by the OOM killer)

In the demo, the container's physical memory is limited to 300M, yet the process wants to allocate 500M of physical memory. With no swap available, the process is simply OOM killed. With enough swap, the program could at least have kept running.

We could forcibly prevent the OOM kill with the --oom-kill-disable option, but in my view the OOM kill is healthy behavior, so why suppress it?
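If you do want to try it, here is a sketch of that option combined with the limits used above (Docker expects --oom-kill-disable to be set together with -m):

$ docker run -it --rm -m 300M --memory-swap=300M --oom-kill-disable u-stress /bin/bash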

Besides limiting the amount of swap that is available, we can also set how eagerly the container uses swap, just like the host's swappiness. A container inherits the host's swappiness by default; to set a swappiness value for a container explicitly, use the --memory-swappiness option.
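For example, a sketch that keeps the 300M limit and tells the container to avoid swapping as much as possible (valid values range from 0 to 100):

$ docker run -it --rm -m 300M --memory-swappiness=0 u-stress /bin/bash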

Summary

By limiting the physical memory a container may use, we can prevent a faulty service inside the container from consuming large amounts of host memory (in such cases, letting the container restart is usually the better strategy), and thereby reduce the risk of the host running out of memory.
