Principles that Docker images should follow: 1. Image minimization: choose the most streamlined base image, clean up the intermediate products of the build, and reduce the number of image layers. 2. Maximize build speed: make full use of the build cache so that previously built layers speed up later builds. 3. Pay attention to optimizing network requests.
The operating environment for this tutorial: Linux 5.9.8, Docker 1.13.1, Dell G3 computer.
1. Why do we need to optimize the image?
As we keep using Docker, if we do not pay attention to optimization along the way, the size of our images grows larger and larger.
Very often, when we deploy an application with Docker, we find that the image is 1 GB or more.
A larger image not only increases the cost of disk and network resources, it also hurts deployment efficiency: deploying the application takes longer and longer.
Therefore, we need to reduce the size of the deployment image to speed up deployment and lower resource overhead.
Image optimization is achieved mainly by optimizing the Dockerfile.
2. Several principles for building images
(1) Image minimization principle
Choose the most streamlined base image
Choosing the smallest base image, such as alpine or busybox, effectively reduces the image size.
Clean up the intermediate products of image construction
While building the image, delete the files the image does not need once the Dockerfile instructions that produced them have run.
If you install components with yum, finish with yum clean all to remove cached packages, or use the rm command to delete source files that are no longer needed.
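For example, with yum the cleanup might look like the sketch below (package names and paths are illustrative). Note that the cleanup only shrinks the image if it happens in the same RUN instruction that created the files, because files deleted in a later layer still exist in the earlier layer.

    RUN yum install -y gcc make pcre-devel && \
        yum clean all && \
        rm -rf /tmp/build    # remove leftovers in the same layer that created them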
Reduce the number of layers of the image
An image is stored in layers, and there is a limit on how many layers an image can have; currently the maximum is 127 layers.
If you do not pay attention, the image will become more and more bloated.
When building an image from a Dockerfile, each instruction in the Dockerfile generates a layer.
Therefore, you can reduce the number of layers in the final image by merging the instructions in the Dockerfile that can be merged.
For example, when executing shell commands with RUN, you can use "&&" to chain multiple commands in a single instruction.
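For example, writing each command as its own RUN produces three layers, while chaining them with "&&" produces a single layer (the package names are illustrative):

    # three RUN instructions -> three layers
    RUN yum install -y gcc
    RUN yum install -y make
    RUN yum clean all

    # one RUN instruction -> one layer
    RUN yum install -y gcc && yum install -y make && yum clean all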
Use the most basic image
The smaller the base image, the more streamlined the resulting image.
(2) The principle of maximizing the build speed
Make full use of the image build cache
We can use the build cache to speed up image construction. docker build enables the cache by default. Three conditions must hold for the cache to take effect:
the parent image layer has not changed, the build instruction itself is unchanged, and the checksums of any added files match.
As long as a build instruction meets these three conditions, that layer is not rebuilt; the result of the previous build is reused directly.
Once the cache of one layer is invalidated, the caches of all subsequent layers are invalidated as well.
We should put the parts that change least at the front of the Dockerfile so that we can make full use of the image cache.
Instructions such as WORKDIR, CMD, ENV, and ADD may cause cache invalidation.
It is best to put these instructions near the bottom of the Dockerfile so the cache is used as much as possible during the build.
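A sketch of this ordering (the base image tag and file paths are illustrative): the rarely changing steps sit at the top so their cached layers survive, and the frequently changing application files are added last.

    FROM centos:7
    # rarely changes: this layer's cache is reused across builds
    RUN yum install -y gcc make && yum clean all
    # changes often: every layer from here down is rebuilt when the files change
    COPY app/ /opt/app/
    CMD ["/opt/app/start.sh"]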
Delete unnecessary files in the build directory (default: the directory where the Dockerfile is located)
Write a .dockerignore file to filter out files that are not needed during the build, or create a separate directory that contains only the files the build actually needs.
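A .dockerignore sketch (the entries are illustrative), placed next to the Dockerfile so these files are never sent as part of the build context:

    .git
    *.log
    tmp/
    docs/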
At runtime, Docker is divided into the Docker engine (the server-side daemon) and the client tools.
The Docker engine provides a set of REST APIs, called the Docker Remote API,
and client tools such as the docker command interact with the engine through these APIs to carry out the various functions.
So although it looks as if we execute Docker functions locally, everything is actually done on the server side (the Docker engine) through remote calls. In particular, docker build does not build the image locally: the build happens on the server, that is, inside the Docker engine.
When building an image, Docker first prepares the build context and sends all the required files to the engine.
The default context includes all files in the Dockerfile directory.
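For example, in the command below the final "." names the build context directory; everything under it, minus the .dockerignore entries, is sent to the Docker engine before the build starts (the image tag is illustrative):

    docker build -t mynginx:v1 .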
(3) Pay attention to optimizing network requests
When the Dockerfile uses package mirror sources or downloads from URLs on the Internet,
choosing a fast and reliable open-source mirror site saves time and reduces the failure rate.
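For instance, on an alpine base image you can point apk at a closer mirror before installing packages; the mirror hostname below is only a placeholder:

    FROM alpine:3.15
    # switch the package repositories to a nearby mirror (placeholder hostname)
    RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.example.com/g' /etc/apk/repositories && \
        apk add --no-cache nginx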
3. Simulating compiling nginx from source in the virtual machine
Choose the most streamlined base image; reduce the number of image layers; clean up the intermediate products of the build; pay attention to optimizing network requests; use the build cache as much as possible.
Start Docker:
View the images and delete the useless ones:
First compile nginx from source; once you are familiar with the steps, you can run nginx in the container:
Close debug:
View the executed commands:
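As a rough sketch of these steps (the nginx version, package names, and paths are assumptions for illustration), compiling nginx from source and turning off the debug flag might look like this:

    # install the build dependencies
    yum install -y gcc make pcre-devel zlib-devel openssl-devel
    # unpack the source (version is illustrative)
    tar zxf nginx-1.20.1.tar.gz && cd nginx-1.20.1
    # a common way to turn off the debug flag so the compiled binary is smaller
    sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/' auto/cc/gcc
    ./configure --prefix=/usr/local/nginx --with-http_ssl_module
    make && make install
    # start nginx and check the running process
    /usr/local/nginx/sbin/nginx
    ps aux | grep nginx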
4. Optimization of the image
Staged construction of the image: next, we start a container from the rhel7 image, install the nginx source package inside the container, build a new image from that container, and then optimize it.
(1) Send the two packages from the real machine to server1.
Optimization idea: put the RUN commands on a single line to reduce the number of image layers.
Write the Dockerfile as follows:
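A sketch of what such a Dockerfile might look like with everything in one RUN instruction (the base image tag, nginx version, and paths are illustrative assumptions):

    FROM rhel7
    # ADD auto-extracts local tar archives into the target directory
    ADD nginx-1.20.1.tar.gz /mnt
    WORKDIR /mnt/nginx-1.20.1
    # build, install, and clean up in a single layer
    RUN yum install -y gcc make pcre-devel zlib-devel && \
        sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/' auto/cc/gcc && \
        ./configure --prefix=/usr/local/nginx && \
        make && make install && \
        cd / && rm -rf /mnt/nginx-1.20.1 && \
        yum clean all
    WORKDIR /
    EXPOSE 80
    CMD ["/usr/local/nginx/sbin/nginx", "-g", "daemon off;"]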
Optimization idea: use a multi-stage build.
The Dockerfile is as follows:
First, simulate turning off debug on the command line:
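A minimal multi-stage sketch, assuming the same rhel7 base image and nginx source package as above (names and versions are illustrative); the debug flag is turned off in the build stage, and only the compiled /usr/local/nginx directory is copied into the final stage:

    # stage 1: build nginx from source
    FROM rhel7 AS build
    ADD nginx-1.20.1.tar.gz /mnt
    WORKDIR /mnt/nginx-1.20.1
    RUN yum install -y gcc make pcre-devel zlib-devel && \
        sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/' auto/cc/gcc && \
        ./configure --prefix=/usr/local/nginx && \
        make && make install

    # stage 2: copy only the compiled result; the compilers and sources stay behind
    FROM rhel7
    COPY --from=build /usr/local/nginx /usr/local/nginx
    EXPOSE 80
    CMD ["/usr/local/nginx/sbin/nginx", "-g", "daemon off;"]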
First we need to import a distroless image and an nginx image. A "distroless" image contains only the application and its runtime dependencies; it has no package manager, no shell, and none of the other programs found in a standard Linux distribution. Using distroless strips everything unnecessary out of the container.
(1) Look at the example on the GitHub website:
(2) Send the files from the real machine to server1
(3) Import the images
(4) Write the Dockerfile as follows
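A rough sketch of such a Dockerfile, using the official nginx image as the donor stage and gcr.io/distroless/base as the final base; the exact set of files and shared libraries that must be copied depends on how nginx was built, so treat the paths below as assumptions:

    # stage 1: the official nginx image supplies the binary, config, and content
    FROM nginx AS base

    # stage 2: a distroless base with no shell and no package manager
    FROM gcr.io/distroless/base
    COPY --from=base /usr/sbin/nginx /usr/sbin/nginx
    COPY --from=base /etc/nginx /etc/nginx
    COPY --from=base /usr/share/nginx/html /usr/share/nginx/html
    COPY --from=base /var/log/nginx /var/log/nginx
    # nginx also needs its shared libraries; the exact list depends on the build
    COPY --from=base /lib/x86_64-linux-gnu/ /lib/x86_64-linux-gnu/
    EXPOSE 80
    ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]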
(5) Build the image and check its size
(6) Build the container and test it
Check the IP address; the default Nginx welcome page is reachable, which proves the container image works, but it can only be accessed from the internal network:
Install the tool for viewing the bridge:
View the bridge:
Do the port mapping:
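A typical way to do the mapping is to publish the container port when starting the container (the image and container names are illustrative):

    docker run -d --name web -p 80:80 mynginx:distroless

With -p 80:80, requests to port 80 on the host are forwarded to port 80 inside the container, so the page becomes reachable from outside.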
Now it can be accessed from the external network: