In Docker, a data volume is a special directory on the host that can be used and shared by one or more containers. It makes transferring data between the host and containers more efficient: modifications to the volume take effect immediately and can be made either inside the container or in the host directory.
The operating environment for this tutorial: Linux 5.9.8, Docker 1.13.1, Dell G3 computer.
What is a docker data volume?
A data volume (Data Volumes) is a directory or file on the host. Data volumes are designed for data persistence and are completely independent of the container's life cycle, so Docker does not delete a mounted data volume when the container is deleted. When a container directory is bound to a data volume directory, modifications on either side are synchronized immediately. One data volume can be mounted by multiple containers at the same time, and one container can also mount multiple data volumes.
Data volume features
Data volumes can be shared and reused between containers, and data transfer between the host and containers is more efficient
Modifications to a data volume take effect immediately; the volume can be modified inside the container or in the host directory
Updates to a data volume do not affect the image, decoupling data from the application
A volume persists independently of containers; it remains even after the last container using it is removed, until it is explicitly deleted
1. Mounting host data into a container
In Docker, to achieve data persistence (meaning the data does not end with the life of the container), data needs to be mounted from the host into the container. Docker currently provides three different ways to do this: (1) volumes: Docker manages part of the host file system, located by default in the /var/lib/docker/volumes directory (the most commonly used method).
As the figure above shows, all container data is currently stored in this directory. Since no volume was specified when these containers were created, Docker created many anonymous volumes (the ones with long ID names) for us by default.
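As a sketch (assuming a default Docker install with its volume root at /var/lib/docker), these anonymous volumes can be listed and cleaned up like this:

```shell
# List all volumes; anonymous ones appear as long hash-like names
docker volume ls

# Their data lives under the default volume root on the host
sudo ls /var/lib/docker/volumes/

# Remove all volumes not referenced by any container
docker volume prune
```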
(2) bind mounts: the data can be stored anywhere on the host system (a commonly used method). However, bind mounts are not portable across host systems. For example, the directory structures of Windows and Linux differ, so the host directories a bind mount points to cannot be the same. This is also why bind mounts cannot appear in a Dockerfile: they would make the Dockerfile non-portable.
(3) tmpfs: the mount is stored in the host's memory and is never written to the host's file system (a method that is rarely used). A schematic diagram of the three methods is as follows:
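As a command-line illustration of the three styles (a sketch; the nginx image, paths, and names here are only examples):

```shell
# volume: Docker-managed, stored under /var/lib/docker/volumes by default
docker run -d --name vol-demo  -v demo-vol:/data nginx

# bind mount: any path on the host, here /srv/demo
docker run -d --name bind-demo -v /srv/demo:/data nginx

# tmpfs: kept in the host's memory only, never written to disk
docker run -d --name tmp-demo  --tmpfs /data:size=64m nginx
```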
2. Basic use of Volumes
2.1 Managing volumes
# docker volume create edc-nginx-vol      // create a custom container volume
# docker volume ls                        // list all container volumes
# docker volume inspect edc-nginx-vol    // show details of the specified volume
For example, here we create a custom container volume named "edc-nginx-vol":
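After creating it, `docker volume inspect` prints JSON along these lines (the Mountpoint assumes a default install; exact fields vary by Docker version):

```shell
docker volume inspect edc-nginx-vol
# [
#     {
#         "Driver": "local",
#         "Labels": {},
#         "Mountpoint": "/var/lib/docker/volumes/edc-nginx-vol/_data",
#         "Name": "edc-nginx-vol",
#         "Options": {},
#         "Scope": "local"
#     }
# ]
```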
2.2 Creating a container with the specified volume
With the custom container volume in place, we can create a container that uses it. Here we take nginx as an example:
# docker run -d -it --name=edc-nginx -p 8800:80 -v edc-nginx-vol:/usr/share/nginx/html nginx
Here, -v mounts the data volume: the custom volume edc-nginx-vol is mounted to /usr/share/nginx/html (the default web directory when nginx is installed via yum).
If -v is not specified, Docker creates an anonymous data volume for the mapping by default.
After creating the container, we can enter it and take a look:
We can see the two default pages. Now we open a new SSH connection to the host and look inside the data volume we just created:
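The two "take a look" steps might be, assuming the default paths used above:

```shell
# Inside the container: list the mounted web directory
docker exec -it edc-nginx ls /usr/share/nginx/html
# the nginx defaults 50x.html and index.html should be listed

# From the host: the same files, seen through the volume's mountpoint
sudo ls /var/lib/docker/volumes/edc-nginx-vol/_data
```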
We can see the container's two default pages from here, which shows that the volume gives us something like a soft link: changes made inside the container are visible on the host, and changes made on the host are visible inside the container.
Now, if we manually stop and remove the current nginx container, we will find that the files in the volume are still there; they have not been deleted.
This verifies that data in a volume is persistent: if we need to create another nginx container later, it can simply reuse the files already in the volume.
In addition, we can start multiple nginx container instances that share the same data volume, which gives strong reusability and scalability.
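Sharing the volume between instances could be sketched like this (the container names and host ports are arbitrary):

```shell
# Two more nginx instances serving content from the same volume
docker run -d --name=edc-nginx-2 -p 8801:80 -v edc-nginx-vol:/usr/share/nginx/html nginx
docker run -d --name=edc-nginx-3 -p 8802:80 -v edc-nginx-vol:/usr/share/nginx/html nginx

# A file written through any one container (or on the host) is visible to all of them
```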
2.3 Cleaning up volumes
If the custom data volume is no longer needed, it can be removed manually:
# docker stop edc-nginx              // stop the container instance
# docker rm edc-nginx                // remove the container instance
# docker volume rm edc-nginx-vol     // delete the custom data volume
3. Basic use of Bind Mounts
3.1 Creating a container with a bind mount
docker run -d -it --name=edc-nginx -v /app/wwwroot:/usr/share/nginx/html nginx
This mounts the host directory /app/wwwroot (created automatically if it does not exist) to /usr/share/nginx/html in the container (the default web directory when nginx is installed via yum).
Now let's enter the container again and take a look:
We can see that, unlike volumes, a bind mount hides the existing contents of the mounted directory (if it is non-empty); here the contents of /usr/share/nginx/html are hidden, so we cannot see them.
However, we can mount files from the host into the container at any time:
Step 1. Create a new index.html
Step 2. View it inside the container
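The two steps might look like this (the page content is arbitrary):

```shell
# Step 1: create index.html on the host side of the bind mount
echo '<h1>hello bind mount</h1>' | sudo tee /app/wwwroot/index.html

# Step 2: confirm the file is visible inside the container
docker exec edc-nginx cat /usr/share/nginx/html/index.html
```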
3.2 Verifying the binding
docker inspect edc-nginx
The command above prints a large amount of configuration; the part we care about is the Mounts section:
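The Mounts section can also be extracted directly; for the bind mount above it looks roughly like this (field layout per `docker inspect`):

```shell
docker inspect -f '{{ json .Mounts }}' edc-nginx
# [{"Type":"bind","Source":"/app/wwwroot",
#   "Destination":"/usr/share/nginx/html",
#   "Mode":"","RW":true,"Propagation":"rprivate"}]
```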
3.3 Cleaning up
docker stop edc-nginx
docker rm edc-nginx
As with volumes, after we remove the container, the files in the mounted directory are still there; they do not disappear when the container ends, thus achieving data persistence.
3.4 An application example
Among service governance components, service discovery is one of the most commonly used. Consul is a popular open source service discovery project, and Consul recommends registering service information through configuration files. Therefore, we often place the completed service registration configuration files in a directory on the host and mount that directory into a specified directory of the Consul container, as shown below:
docker run -d -p : --restart=always \
  -v /XiLife/consul/data/server1:/consul/data \
  -v /XiLife/consul/conf/server1:/consul/config \
  -e CONSUL_BIND_INTERFACE= --privileged= \
  --name=consul_server_1 consul: \
  agent -server -bootstrap-expect= -ui -node=consul_server_1 -client= \
  -data-dir=/consul/data -config-dir=/consul/config -datacenter=xdp_dc
As you can see, via bind mounts we mounted the host directory /XiLife/consul/data/server1 to the container's /consul/data directory, and /XiLife/consul/conf/server1 to the container's /consul/config directory; those two container directories, /consul/data and /consul/config, are where we told the agent to keep its data and configuration files. Changes to the configuration files on the host are therefore reflected in the container promptly; for example, after updating a configuration file in the host directory, we only need to reload the Consul container instance:
docker exec consul_server_1 consul reload
*Note: consul_server_1 here is the container name (matching --name above), and consul reload is the reload command (not restart).
The above is the detailed content of What is a docker data volume?. For more information, please follow other related articles on the PHP Chinese website!
