
Does Docker support GPUs?

尚 · Original · 2020-04-02 17:31:09


Docker does support GPUs: containers can use GPUs through nvidia-docker2. Configure the nvidia runtime in the daemon.json file, then start a container and run nvidia-smi to see all of the GPUs.

How to mount GPUs into Docker containers:

Using nvidia-docker2

In short, with nvidia-docker2 you can use GPUs with almost no effort; all that is needed is to set the runtime to nvidia:

cat /etc/docker/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "exec-opts": ["native.cgroupdriver=systemd"]
}
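
After editing daemon.json, the Docker daemon has to be restarted before the nvidia runtime takes effect. A minimal sketch, assuming a systemd-managed host:

systemctl daemon-reload        # reload daemon configuration
systemctl restart docker       # restart Docker so daemon.json is re-read
docker info | grep -i runtime  # confirm the nvidia runtime is registered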

After starting a container, running nvidia-smi shows all of the GPU cards:

[root@localhost]# docker run -it 98b41a1e975d bash
root@6db1dd28459d:/notebooks# nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:8A:00.0 Off |                    0 |
| N/A   40C    P0    57W / 300W |   4053MiB / 16130MiB |      4%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  On   | 00000000:8B:00.0 Off |                    0 |
| N/A   38C    P0    40W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2...  On   | 00000000:8C:00.0 Off |                    0 |
| N/A   42C    P0    46W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-SXM2...  On   | 00000000:8D:00.0 Off |                    0 |
| N/A   39C    P0    40W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   4  Tesla V100-SXM2...  On   | 00000000:B3:00.0 Off |                    0 |
| N/A   39C    P0    42W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   5  Tesla V100-SXM2...  On   | 00000000:B4:00.0 Off |                    0 |
| N/A   41C    P0    57W / 300W |   7279MiB / 16130MiB |      4%      Default |
+-------------------------------+----------------------+----------------------+
|   6  Tesla V100-SXM2...  On   | 00000000:B5:00.0 Off |                    0 |
| N/A   40C    P0    45W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   7  Tesla V100-SXM2...  On   | 00000000:B6:00.0 Off |                    0 |
| N/A   41C    P0    44W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
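
Note: the image ID 98b41a1e975d above is simply the example image present on this host. On Docker 19.03 and later with the NVIDIA container toolkit installed, roughly the same result can also be obtained with the built-in --gpus flag instead of setting a default runtime; a hedged sketch:

docker run -it --gpus all 98b41a1e975d bash   # expose all GPUs to the container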

With NVIDIA_DRIVER_CAPABILITIES you can mount only part of the driver libraries into the container, and with NVIDIA_VISIBLE_DEVICES you can restrict the container to specific GPU cards:

[root@localhost cuda-9.0]# docker run -it  --env NVIDIA_DRIVER_CAPABILITIES="compute,utility"  --env NVIDIA_VISIBLE_DEVICES=0,1 98b41a1e975d bash
root@97bf127ff83a:/notebooks# nvidia-smi
Tue Oct 15 09:29:45 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:8A:00.0 Off |                    0 |
| N/A   39C    P0    57W / 300W |   4053MiB / 16130MiB |      3%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  On   | 00000000:8B:00.0 Off |                    0 |
| N/A   37C    P0    40W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
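
These two variables can also be declared directly in an image's Dockerfile, so that every container started from that image is limited to the chosen GPUs by default. A minimal sketch, assuming a CUDA base image (nvidia/cuda:10.0-base is only an example tag):

FROM nvidia/cuda:10.0-base
# Only GPUs 0 and 1 are visible inside containers started from this image
ENV NVIDIA_VISIBLE_DEVICES=0,1
# Mount only the compute and utility driver libraries
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility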
