


Detailed introduction to docker data volume management & convoy volume plug-in (detailed examples)
This article covers data volume management in Docker and the convoy volume plugin, with worked examples.
1. What is a Docker data volume
A data volume is a specially designated directory in one or more containers that bypasses the union file system.
Volumes are designed for data persistence and are independent of the container's life cycle. Docker therefore does not delete data volumes when a container is deleted, and it never "garbage-collects" volumes that are no longer referenced by any container.
Data volumes exist to persist a container's data and to share data between containers.
In everyday terms, a Docker data volume is like a USB stick: it exists in one or more containers and is mounted into the container by Docker, but it is not part of the union file system, and Docker does not delete the mounted data volume when the container is deleted.
2. Why use data volumes
Docker's layered file system:
- poor performance
- life cycle tied to the container
Docker data volumes:
- mounted from the host, bypassing the layered file system
- performance equal to the host disk; data is retained after the container is deleted
- local disk only; cannot be migrated with the container
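To illustrate the persistence point, here is a minimal sketch (the volume name `webdata` and the nginx image are illustrative; a running Docker daemon is assumed): deleting the container leaves the named volume behind.

```shell
# Start a container with a named managed volume
docker run -d --name web -v webdata:/usr/share/nginx/html nginx

# Remove the container; the volume is NOT removed with it
docker rm -f web

# The volume is still listed and can be attached to a new container
docker volume ls
```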
3. Docker provides two types of volumes
bind mount
Mounts a directory or file from the host into the container.
- Intuitive, efficient, and easy to understand.
- The path is specified with the -v option, in the format -v <host_path>:<container_path>[:options].
- The default permission is read-write (rw); read-only (ro) can be specified when mounting.
- If the path given to -v does not exist on the host, it is created automatically when mounting.
docker managed volume
A bind mount must specify a host file system path, which limits portability.
A docker managed volume does not need a mount source; Docker creates the source directory automatically under /var/lib/docker/volumes.
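The difference in invocation can be sketched as follows (the paths match the examples later in this article; both commands assume a running Docker daemon):

```shell
# bind mount: the host path is given explicitly on the left of the colon
docker run -d -v /opt/website:/usr/share/nginx/html nginx

# managed volume: only the container path is given; Docker chooses the
# source directory itself under /var/lib/docker/volumes
docker run -d -v /usr/share/nginx/html nginx
```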
Comparison between bind mount and docker managed volume
Similarities: both are paths in the host file system.
Differences:
- mount source: chosen by the user for a bind mount; created by Docker under /var/lib/docker/volumes for a managed volume
- existing data at the mount point: a bind mount hides the original data in the container directory; a managed volume copies the container directory's original data into the volume
- mount granularity: a bind mount can mount a single file; a managed volume mounts directories only
4. bind mount application
docker network prune
docker network ls
docker run -d --name vm1 -v /opt/website:/usr/share/nginx/html nginx
docker ps
docker inspect vm1    # the container's IP is 172.17.0.2
curl 172.17.0.2
A 403 page is returned:
cd /opt/website/
ls                    # there is no default publish page yet
echo www.westos.org > index.html
curl 172.17.0.2
Visiting nginx now returns the www.westos.org content:
You can also specify permissions when mounting:
docker run -it --rm -v /opt/website:/data1 -v /etc/passwd:/data2/passwd:ro busybox
Inside the container, the default permission on /data1 is rw (read-write), so the content of index.html can be changed; /data2/passwd was mounted read-only, so its content can only be read, not modified.
5. docker managed volume application
Sometimes managed volumes are left behind after a container is deleted. They should be cleaned up, or they will keep occupying resources:
docker volume ls
docker volume prune
docker volume ls
docker run -d --name registry registry
cd /var/lib/docker/volumes/
ls
docker history registry:latest
With a docker managed volume, the contents of the container directory are copied to the mount point:
docker run -d --name vm2 -v /usr/share/nginx/html nginx
cd /var/lib/docker/volumes/
ls
cd 674c999f99b7b524d8f5769b65cb5411d11e3fa855da695a5fdd3494e4342d89/
cd _data/
ls                    # the default publish directory has been copied here
docker inspect vm2
curl 172.17.0.3       # nginx default publish page
echo hello docker! > index.html
curl 172.17.0.3       # the default publish page can be modified directly in the mounted directory
6. Docker volume plugins
By default, docker volume uses the local driver, so volumes can only exist on the host machine. Cross-host volumes require a third-party driver; see: https://docs.docker.com/engine/extend/legacy_plugins/#volume-plugins
A Docker plugin runs as a web service on each Docker host, and communication is done over HTTP carrying RPC-style JSON. Starting and stopping a plugin is not managed by Docker; the Docker daemon discovers available plugins automatically by looking for Unix socket files under a default path.
When a client interacts with the daemon and creates a volume through a plugin, the daemon locates the plugin's socket file in the backend, establishes a connection, issues the corresponding API requests, and combines them with its own processing to complete the client's request.
7. convoy volume plugin
The convoy volume plugin supports three backends: devicemapper, NFS, and EBS. The experiment below uses the NFS mode.
Goal: share data between server1 and server2 over NFS at the bottom layer.
step1 First set up the NFS file system on server1 and server2:
server1:
yum install -y nfs-utils
systemctl start rpcbind
mkdir /nfs            # create the shared directory
chmod 777 /nfs        # adjust the shared directory's permissions
vim /etc/exports      # edit the exports file, otherwise the directory will not be shared
/nfs *(rw,no_root_squash)
systemctl start nfs
Note: the rpcbind service must be running. rpcbind is an RPC service whose job during an NFS share is to tell clients which port the server's NFS service listens on; RPC can be understood simply as a broker service.
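To check what rpcbind has registered, `rpcinfo` can be used (a sketch, run on server1; the output varies with the installed NFS version):

```shell
# List the RPC programs currently registered with rpcbind;
# after "systemctl start nfs" this should include portmapper,
# mountd, and nfs with their port numbers
rpcinfo -p localhost
```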
server2:
yum install -y nfs-utils
systemctl start nfs-server.service
showmount -e server1   # list the directories exported by server1
mkdir /nfs
mount server1:/nfs /nfs
df
Test:
On server2:
cd /nfs/
touch file
On server1:
cd /nfs/
ls                     # file is visible
This shows that /nfs is synchronized between the two nodes.
step2 Configure the convoy environment:
Docker officially only provides the volume plugin API; developers can implement volume plugin drivers to fit their actual needs.
On server1:
tar zxf convoy.tar.gz
cd convoy/
cp convoy* /usr/local/bin/      # put the binaries on the PATH
mkdir /etc/docker/plugins       # create Docker's plugin directory
convoy daemon --drivers vfs --driver-opts vfs.path=/nfs &> /dev/null &
cd /nfs
ls
Note: the first time the convoy daemon command above runs, it generates a config directory under /nfs. Do not delete it, or the convoy command on the client side will stop working.
echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec   # write the address of the .sock file created by the convoy daemon into convoy.spec under /etc/docker/plugins so Docker can recognize the plugin (convoy.spec does not exist beforehand)
cat /etc/docker/plugins/convoy.spec
Configure the convoy environment on server2 in the same way:
scp -r server1:convoy .
cd convoy/
cp convoy* /usr/local/bin/      # put the binaries on the PATH
mkdir /etc/docker/plugins       # create Docker's plugin directory
echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec
convoy daemon --drivers vfs --driver-opts vfs.path=/nfs &> /dev/null &
cd /nfs
ls
step3 Create a volume:
docker volume ls
convoy create vol1
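As an aside, since the daemon discovers convoy through its .spec file, the volume can also be created through the Docker CLI by naming the driver explicitly (a sketch; it assumes the plugin registered correctly, and the driver name matches the .spec file name):

```shell
# Equivalent to "convoy create vol1", but issued through the Docker daemon
docker volume create --driver convoy vol1
docker volume ls    # vol1 should be listed with driver "convoy"
```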
step4 Use the volume:
Run a container on server2, specifying the newly created volume vol1:
docker run -it --name vm1 -v vol1:/usr/share/nginx/html nginx
docker ps
docker inspect vm1
curl 172.17.0.2       # nginx default publish page
cd /nfs/
cd vol1/
echo hello convoy > index.html
curl 172.17.0.2
The data is synchronized on server1 as well:
cd /nfs/
cd vol1/
cat index.html
A container can also be run on server1 and use the shared data volume.
Explanation: the Docker engine scans convoy.spec in the /etc/docker/plugins directory -> connects to the /run/convoy/convoy.sock file -> issues the corresponding API requests -> writes the data into vol1 -> at the bottom layer, NFS synchronizes the data between the hosts.
How do we delete a data volume created over NFS, so that volumes created afterwards are local?
Delete the volume:
convoy delete vol1
Switch back to the local driver:
cd /etc/docker/plugins/
mv convoy.spec /mnt
systemctl restart docker
Create a volume:
docker volume create vol1
ls
cd volumes/
ls            # vol1 is visible; volumes are created under this directory by default
cd vol1/
ls
cd _data/
ls            # the directory is empty
Use the volume:
docker run -d --name vm1 -v vol1:/usr/share/nginx/html nginx
docker ps
ls            # nginx's default publish directory has been mounted here
A few more useful commands:
docker container prune    # remove stopped containers
docker volume prune       # remove unused volumes
