This repository has been archived by the owner on Jun 6, 2024. It is now read-only.
Support nvidia-container-runtime for gpu isolation #2352
Merged
Support nvidia-container-runtime for gpu isolation:

- Set `NVIDIA_VISIBLE_DEVICES` to `void` to avoid conflicts with runc (reference: https://github.com/NVIDIA/nvidia-container-runtime#nvidia_visible_devices). Works both with and without `--runtime=nvidia`.
- Set default runtime to runc: pass `--runtime runc` explicitly to override the default runtime.
- Drop the `MKNOD` capability when starting containers: `nvidia-smi` makes all GPU devices show up under `/dev`, so dropping `MKNOD` avoids this. Reference: "Chaotic device name show in container`s /dev/ path and with GPU isolation" NVIDIA/nvidia-docker#170