nvidia-docker2.0 is a simple package whose main job is to let Docker use the NVIDIA container runtime by modifying Docker's configuration file /etc/docker/daemon.json.
Operating environment for this article: Windows 10, Docker 20.10.11, Dell G3 computer.
Introduction to NVIDIA Docker
NVIDIA began designing NVIDIA-Docker in 2016 to make it easier for containers to use NVIDIA GPUs. The first generation, nvidia-docker1.0, wrapped the docker client and mounted the necessary GPU devices and libraries into the container at startup. However, this design was tightly coupled to the docker runtime and lacked flexibility. Its defects were as follows:
Tightly coupled to docker, with no support for other container runtimes, such as LXC, CRI-O, and runtimes that may be added in the future.
Unable to take full advantage of other tools in the docker ecosystem, such as docker compose.
GPUs could not be used as a resource of a scheduling system for flexible scheduling.
In addition, GPU support at container runtime needed improvement, for example automatically obtaining user-level NVIDIA driver libraries, NVIDIA kernel modules, device ordering, and so on.
Based on the shortcomings described above, NVIDIA began designing the next-generation container runtime: nvidia-docker2.0.
The implementation mechanism of nvidia-docker 2.0
First, a brief introduction to the relationships among nvidia-docker 2.0, containerd, nvidia-container-runtime, libnvidia-container, and runc.
The relationship between them can be illustrated by the following diagram:
nvidia-docker 2.0
nvidia-docker2.0 is a simple package that mainly allows docker to use the NVIDIA Container runtime by modifying the docker configuration file /etc/docker/daemon.json.
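As a concrete illustration, installing nvidia-docker2 typically results in a daemon.json similar to the following, which registers nvidia-container-runtime as a runtime named nvidia (the exact path may vary by distribution):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

With this in place, `docker run --runtime=nvidia ...` starts a container with the NVIDIA runtime; adding `"default-runtime": "nvidia"` to the same file makes it the default for all containers.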
nvidia-container-runtime
nvidia-container-runtime is the real core. It adds a prestart hook to the original docker container runtime, runc; the hook is used to call the libnvidia-container library.
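At the OCI level, a prestart hook is just an entry in the container's config.json. A rough sketch of the kind of entry nvidia-container-runtime injects is shown below; the hook path /usr/bin/nvidia-container-runtime-hook is an assumption and may differ by installation:

```json
{
    "hooks": {
        "prestart": [
            {
                "path": "/usr/bin/nvidia-container-runtime-hook",
                "args": ["nvidia-container-runtime-hook", "prestart"]
            }
        ]
    }
}
```

runc executes prestart hooks after creating the container's namespaces but before starting the user process, which is what gives the hook a chance to inject GPU devices and libraries.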
libnvidia-container
libnvidia-container provides a library and a simple CLI tool that make NVIDIA GPUs available to Linux containers.
containerd
containerd is mainly responsible for:
Manage the container life cycle (from creation to destruction)
Pull/Push container images
Storage management (manage the storage of images and container data)
Call runc to run the container
Manage the container's network interfaces and networking
After containerd receives a request, it makes the relevant preparations; it can either call runc itself or create a containerd-shim that then calls runc. runc creates the container based on the OCI file. This is the basic process of ordinary container creation.
runc
runc is a lightweight tool for running containers; it does one thing and does it well. You can think of it as a small command-line tool that runs containers directly, without going through the docker engine. runc is in fact a product of standardization: it creates and runs containers according to the OCI standard. The OCI (Open Container Initiative) aims to develop open industry standards around container formats and runtimes.
You can create a container directly from the runc command line, which also provides simple interaction capabilities.
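To make the OCI bundle idea concrete, here is a heavily trimmed sketch of the config.json that runc consumes; the real file (as generated by `runc spec`) contains many more fields, such as namespaces, mounts, and capabilities:

```json
{
    "ociVersion": "1.0.2",
    "process": {
        "args": ["sh"],
        "cwd": "/"
    },
    "root": {
        "path": "rootfs"
    }
}
```

Given a bundle directory containing this config.json plus a rootfs/ directory, `runc run <container-id>` executed in that directory starts the container.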
The functions of each component and the relationships between them have been introduced above. Next, let's describe the diagram in detail.
The normal container-creation process is as follows:
docker --> dockerd --> containerd --> containerd-shim --> runc --> container-process
The docker client sends a create-container request to dockerd. When dockerd receives the request, it forwards it to containerd. After checking and verifying the request, containerd either starts a containerd-shim or itself starts the container process.
Create a container that uses GPU
The process of creating a GPU container is as follows:
docker --> dockerd --> containerd --> containerd-shim --> nvidia-container-runtime --> nvidia-container-runtime-hook --> libnvidia-container --> runc --> container-process
The basic process is similar to that of a container that does not use a GPU, except that docker's default runtime is replaced by NVIDIA's own nvidia-container-runtime.
This way, when nvidia-container-runtime creates a container, it first executes the nvidia-container-runtime-hook prestart hook to check whether the container needs to use a GPU (determined by the environment variable NVIDIA_VISIBLE_DEVICES). If so, it calls libnvidia-container to expose the GPU to the container; otherwise, the default runc logic is used.
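A minimal sketch of that decision in Python, assuming the hook only inspects NVIDIA_VISIBLE_DEVICES (the real hook in libnvidia-container does considerably more, such as injecting devices and driver libraries); container_needs_gpu is a hypothetical helper name:

```python
def container_needs_gpu(env: dict) -> bool:
    """Decide whether GPUs should be exposed to a container,
    based on its environment, mimicking the hook's check."""
    devices = env.get("NVIDIA_VISIBLE_DEVICES")
    # Unset or "void" means no GPU support; any other value
    # ("all", "0,1", a GPU UUID, ...) requests GPU access.
    return devices is not None and devices != "void"

print(container_needs_gpu({"NVIDIA_VISIBLE_DEVICES": "all"}))  # True
print(container_needs_gpu({}))                                 # False
```

When the check passes, the hook hands off to libnvidia-container; when it fails, the container proceeds through the unmodified runc path.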
At this point, the general mechanism of nvidia-docker2.0 should be basically clear. The projects involved (nvidia-container-runtime, libnvidia-container, containerd, and runc) are not introduced one by one in this article; if you are interested, you can explore them on your own. The addresses of these projects have been linked in the article.
The above is the detailed content of what is nvidia docker2. For more information, please follow other related articles on the PHP Chinese website!

