
Does Docker support GPU?


Yes, Docker supports the GPU. With nvidia-docker2 installed, you configure the nvidia runtime in the daemon.json file; once a container is started with that runtime, running nvidia-smi inside it shows all of the GPUs.

How to mount GPUs with Docker:

Using nvidia-docker2

In short, with nvidia-docker2 you can use the GPU with almost no effort; you only need to configure the runtime in /etc/docker/daemon.json.
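If nvidia-docker2 is not installed yet, installation is usually a single package once the NVIDIA driver and the nvidia-docker package repository are in place (a hedged sketch, not part of the original article; use your distribution's package manager):

# Assumes the host NVIDIA driver and the nvidia-docker repository are already set up
yum install -y nvidia-docker2          # RHEL/CentOS
# apt-get install -y nvidia-docker2    # Debian/Ubuntu

Installing the package provides the /usr/bin/nvidia-container-runtime binary referenced in the daemon.json below.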

cat /etc/docker/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "exec-opts": ["native.cgroupdriver=systemd"]
}
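After editing daemon.json, restart the Docker daemon so that the nvidia runtime becomes the default, and confirm it is registered (a minimal sketch, assuming a systemd-based host):

systemctl restart docker
docker info | grep -i runtime
# The output should include lines similar to:
#   Runtimes: nvidia runc
#   Default Runtime: nvidia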

After starting a container with the nvidia runtime, you can see all the GPU cards by running nvidia-smi:

[root@localhost]# docker run -it 98b41a1e975d bash
root@6db1dd28459d:/notebooks# nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:8A:00.0 Off |                    0 |
| N/A   40C    P0    57W / 300W |   4053MiB / 16130MiB |      4%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  On   | 00000000:8B:00.0 Off |                    0 |
| N/A   38C    P0    40W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2...  On   | 00000000:8C:00.0 Off |                    0 |
| N/A   42C    P0    46W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-SXM2...  On   | 00000000:8D:00.0 Off |                    0 |
| N/A   39C    P0    40W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   4  Tesla V100-SXM2...  On   | 00000000:B3:00.0 Off |                    0 |
| N/A   39C    P0    42W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   5  Tesla V100-SXM2...  On   | 00000000:B4:00.0 Off |                    0 |
| N/A   41C    P0    57W / 300W |   7279MiB / 16130MiB |      4%      Default |
+-------------------------------+----------------------+----------------------+
|   6  Tesla V100-SXM2...  On   | 00000000:B5:00.0 Off |                    0 |
| N/A   40C    P0    45W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   7  Tesla V100-SXM2...  On   | 00000000:B6:00.0 Off |                    0 |
| N/A   41C    P0    44W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

You can control which driver libraries are exposed inside the container with NVIDIA_DRIVER_CAPABILITIES, and restrict the container to specific GPU cards with NVIDIA_VISIBLE_DEVICES:

[root@localhost cuda-9.0]# docker run -it  --env NVIDIA_DRIVER_CAPABILITIES="compute,utility"  --env NVIDIA_VISIBLE_DEVICES=0,1 98b41a1e975d bash
root@97bf127ff83a:/notebooks# nvidia-smi
Tue Oct 15 09:29:45 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:8A:00.0 Off |                    0 |
| N/A   39C    P0    57W / 300W |   4053MiB / 16130MiB |      3%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  On   | 00000000:8B:00.0 Off |                    0 |
| N/A   37C    P0    40W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
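As a side note, on Docker 19.03 and later with the NVIDIA Container Toolkit, GPUs can also be selected with the --gpus flag instead of the NVIDIA_VISIBLE_DEVICES variable (a hedged sketch, reusing the example image ID 98b41a1e975d from above):

docker run -it --gpus all 98b41a1e975d nvidia-smi              # expose all GPUs
docker run -it --gpus '"device=0,1"' 98b41a1e975d nvidia-smi   # expose only GPU 0 and 1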
