


A systematic overview of Docker core technology (summary)
This article walks through Docker's core technology: container operations, a detailed look at the Dockerfile, Linux namespaces, cgroups, and union file systems. I hope it will be helpful to you.
1. Docker
1. Introduction
- Docker is an operating-system-level virtualization technology based on the Linux kernel's cgroups, namespaces, and Union FS, which encapsulate and isolate processes. Because an isolated process is independent of the host and of other isolated processes, it is called a container.
- The initial implementation was based on LXC. Starting from version 0.7, Docker moved away from LXC to its self-developed libcontainer, and from 1.11 it further evolved to use runC and containerd.
- On top of containers, Docker adds further encapsulation, from the file system and network interconnection to process isolation. This greatly simplifies the creation and maintenance of containers and makes Docker much lighter and faster than virtual machine technology.
2. Docker advantages
- More efficient use of system resources
- Faster startup time
- Consistent runtime environment
- Continuous delivery and deployment
- Easier migration
- Easier maintenance and scaling
3. Comparison between Docker and virtual machines
3. Container operation
- Start a container: docker run
  - -it: interactive mode with a terminal attached
  - -d: run in detached (background) mode
  - -P: publish exposed ports to the host (lowercase -p maps a specific port)
  - -v: mount a volume (disk mount)
- Start a stopped container: docker start
- Stop a container: docker stop
- View container processes: docker ps
- View container details: docker inspect
- Copy a file into a container:
docker cp file1 <container>:/file_to_path
- Detach from a container without stopping it: Ctrl+P, then Ctrl+Q
- Exit a container and stop it: exit
Query all local docker images:
docker images
- Docker Hub: https://hub.docker.com
- Create a private image registry: docker run -d -p 5000:5000 registry
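A sketch of the container lifecycle commands above (the container name mynginx, the image, ports, and paths are illustrative assumptions):

```shell
# Run an nginx container detached (-d), with an interactive TTY (-it),
# host port 8080 mapped to container port 80 (-p), and a volume mounted (-v)
docker run -itd --name mynginx -p 8080:80 -v /tmp/webroot:/usr/share/nginx/html nginx

docker ps                                   # view running containers
docker cp ./index.html mynginx:/usr/share/nginx/html/   # copy a file into the container
docker inspect mynginx                      # view container details (JSON)
docker stop mynginx                         # stop the container
docker start mynginx                        # start the stopped container again
```

Inside an attached session, Ctrl+P followed by Ctrl+Q detaches without stopping the container, while exit stops it.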
4. Dockerfile detailed explanation
- FROM: specifies the base image; it must be the first instruction.
  - Formats:
    FROM <image>
    FROM <image>:<tag>
    FROM <image>@<digest>
  - Example: FROM ubuntu
- MAINTAINER: maintainer information (now deprecated in favor of LABEL). Format: MAINTAINER <name>
- RUN: commands executed while building the image.
  - Format (shell form): RUN <command>
  - Example: RUN apt-get update && apt-get install -y <package>. The two commands should always be joined with && in a single RUN instruction; otherwise the layer produced by apt-get update will be cached, and later builds may fail to install new packages against the stale package index.
- ADD: adds local files to the image; archives such as tar are automatically decompressed, and it can fetch network resources, similar to wget.
  - Format: ADD <src> <dest>
- COPY: similar to ADD, but it does not decompress archives and cannot fetch network resources. (On multi-stage builds, see: "multi-stage build in Dockerfile", sparkdev, Cnblogs.)
  - Format: COPY <src> <dest>
- CMD: invoked only when the container starts, not while the image is built.
  - Formats:
    CMD ["executable","param1","param2"] (exec form, preferred)
    CMD ["param1","param2"] (used as default parameters to ENTRYPOINT, if ENTRYPOINT is set)
    CMD command param1 param2 (shell form)
- ENTRYPOINT: configures the container's main executable.
  - Formats:
    ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred)
    ENTRYPOINT command param1 param2 (shell form)
- LABEL: adds metadata to the image.
  - Format: LABEL <key>=<value> <key>=<value> ...
- ENV: sets environment variables.
  - Format: ENV <key> <value>
- EXPOSE: declares the port the container uses for external interaction.
  - Format: EXPOSE <port>
- VOLUME: specifies a persistence (data volume) directory.
  - Format: VOLUME ["/path/to/dir"]
- USER: specifies the user name or UID that runs the container; subsequent RUN instructions also use the specified user.
  - Formats:
    USER user
    USER user:group
    USER uid
    USER uid:gid
    USER user:gid
    USER uid:group
  - Example: USER www
- ARG: specifies a variable passed in at build time.
  - Format: ARG <name>[=<default value>]
  - Example: ARG build_user=ribbon
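Tying the instructions above together, here is a minimal multi-stage Dockerfile sketch (the Go project layout, the binary name app, and the port are illustrative assumptions, not from the original article):

```dockerfile
# --- build stage: compile with the full Go toolchain ---
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# --- runtime stage: copy only the binary into a small base image ---
FROM alpine:3.19
LABEL maintainer="ribbon"
ENV APP_ENV=production
COPY --from=builder /out/app /usr/local/bin/app
EXPOSE 8080
USER nobody
ENTRYPOINT ["/usr/local/bin/app"]
```

The final image contains only the runtime stage's layers, which is the main point of a multi-stage build.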
5. Detailed explanation of Linux NameSpace
For a more detailed introduction to namespaces, see: Linux NameSpace (Frank_Abagnale's blog, CSDN).
Common namespace operations:
- View the namespaces of the current system:
lsns -t <type>
- View the namespaces of a process:
ls -la /proc/<pid>/ns
- Run a command inside a process's namespace (for example, -n for its network namespace):
nsenter -t <pid> -n <command>
6. Detailed explanation of Linux Cgroups
For a more detailed explanation of cgroups, see: Container Core: cgroups (Jianshu).
Simulating cgroup control of CPU resources
To get familiar with how cgroups limit resources, first create a cpudemo folder under the cpu controller directory and run a busy-loop program. Executing top shows busyloop occupying two full CPUs (about 200%). Add the process to the cgroup's process group, then set the CPU quota; you can watch the process drop from about 200% CPU usage down to about 1%.
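A sketch of those steps, assuming a cgroup v1 hierarchy mounted at /sys/fs/cgroup and a running busy-loop process; run as root, and BUSYLOOP_PID is an illustrative variable holding that process's PID:

```shell
# Create a cgroup under the cpu controller (cgroup v1 layout assumed)
mkdir /sys/fs/cgroup/cpu/cpudemo

# Move the already-running busy-loop process into the cgroup
echo $BUSYLOOP_PID > /sys/fs/cgroup/cpu/cpudemo/cgroup.procs

# Quota: 1ms of CPU time per 100ms period => at most 1% of one CPU
echo 1000   > /sys/fs/cgroup/cpu/cpudemo/cpu.cfs_quota_us
echo 100000 > /sys/fs/cgroup/cpu/cpudemo/cpu.cfs_period_us

# Observe the usage drop from ~200% to ~1%
top -p $BUSYLOOP_PID
```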
Simulating a cgroup memory limit being exceeded and the process killed by OOM
Create a memorydemo folder under the /sys/fs/cgroup/memory directory. Run a memory-consuming program and use watch to query its memory usage. Add the process to the cgroup's process group and set the maximum memory size. Wait for the program to be killed by the OOM killer; dmesg shows the kill message.
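The corresponding commands might look like this (cgroup v1 assumed, run as root; MALLOC_PID is an illustrative variable holding the memory-eating process's PID, and the 100 MB limit is arbitrary):

```shell
mkdir /sys/fs/cgroup/memory/memorydemo

# Move the memory-consuming process into the cgroup
echo $MALLOC_PID > /sys/fs/cgroup/memory/memorydemo/cgroup.procs

# Limit the group to 100 MB; once exceeded, the kernel OOM killer steps in
echo 104857600 > /sys/fs/cgroup/memory/memorydemo/memory.limit_in_bytes

# Watch the process's resident memory grow until it is killed
watch "grep VmRSS /proc/$MALLOC_PID/status"
dmesg | tail    # after the kill, shows the OOM message
```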
Note: To delete self-created cgroup folders, you need to use cgroup-tools
7. Union FS
The technologies Docker uses are all derived from existing Linux technology rather than being new inventions; Docker's real innovation lies in its layered image file system.
1. Concept:
- A file system that mounts different directories under the same virtual file system
- Supports setting readonly, readwrite, and whiteout-able permissions for each member directory
- The file system is layered: directories with readonly permission can be modified logically; such modifications are incremental and do not affect the readonly part
- Common uses of Union FS: mounting multiple disks to the same directory, and combining a readonly part with a writeable directory
2. Illustration of Union FS
In the design of Docker images, the concept of a layer is introduced: every step of the user's image-building operation generates a layer, that is, an incremental rootfs (a directory). In this way, the containers running application A and application B can jointly reference the same Ubuntu operating system layer and Golang environment layer (as read-only layers), while each has its own application layer and writable layer. When a container starts, the relevant layers are mounted through Union FS into one directory that serves as the container's root file system.
3. Container storage driver
4. Simulating Union FS to better understand the effect
Since current Docker versions use the overlay storage driver, we use an overlay mount for the experiment. OverlayFS is implemented with three kinds of directories: a lower directory, an upper directory, and a work directory. There can be multiple lower directories. The work directory is a basic working directory; its contents are cleared after mounting and are not visible to the user during use. The unified view presented to the user after the union mount is called the merged directory.
Execute the following command:
mkdir upper lower merged work
echo "lower" > lower/in_lower.txt
echo "from lower" > lower/in_both.txt
echo "from upper" > upper/in_both.txt
echo "upper" > upper/in_upper.txt
path=$(pwd)
mount -t overlay overlay -o lowerdir=${path}/lower,upperdir=${path}/upper,workdir=${path}/work ${path}/merged
You can now see the effect of the overlay mount: merged shows the union of both layers, and in_both.txt contains "from upper" because the upper layer wins. After the experiment, restore the environment by first umounting the merged directory and then deleting the four directories. If you delete the directories first, rm: cannot remove 'merged/': Device or resource busy may appear and the merged directory cannot be deleted.
- Query Docker's built-in network modes: docker network ls
- Choose a network mode with docker run:
1) Host mode: specified with --net=host; the container shares the host's network stack.
2) None mode: specified with --net=none; the network configuration must be set up manually.
3) Bridge mode: specified with --net=bridge; the default setting.
(docker network logic diagram: bridge and NAT)
4) Container mode: specified with --net=container:NAME_or_ID; reuses another container's network configuration.
- Create an nginx container with --net=none
- Create a network namespace
- Create a network namespace link
- Generate an eth0 network device in the nginx container
- Configure the IP address and gateway for eth0
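A sketch of those steps using the ip tool (run as root; the container name, the docker0 bridge, and the addresses 172.17.0.100/16 and 172.17.0.1 are illustrative assumptions):

```shell
# 1. Start an nginx container with no network
docker run -d --name nginx --net=none nginx

# 2. Expose the container's network namespace to `ip netns`
pid=$(docker inspect -f '{{.State.Pid}}' nginx)
mkdir -p /var/run/netns
ln -s /proc/$pid/ns/net /var/run/netns/nginx

# 3. Create a veth pair; attach one end to the host bridge, move the other into the container
ip link add veth0 type veth peer name veth1
ip link set veth0 master docker0 up
ip link set veth1 netns nginx

# 4. Rename the container end to eth0 and configure its IP and default gateway
ip netns exec nginx ip link set veth1 name eth0
ip netns exec nginx ip link set eth0 up
ip netns exec nginx ip addr add 172.17.0.100/16 dev eth0
ip netns exec nginx ip route add default via 172.17.0.1
```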

