Support for nvidia-runtime on containerd and k3s on ARM64 machines due to docker deprecation #3054
Shaked started this conversation in Show and tell
Replies: 1 comment
-
I am following up on this topic after stumbling across https://k3d.io/v4.4.8/usage/guides/cuda/. So the correct setup seems to be:
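Roughly, for k3d the guide amounts to creating the cluster from a custom k3s node image that bundles the NVIDIA container runtime and a patched containerd config (the image name below is only a placeholder, not something from the guide):

```sh
# Build a k3s node image with nvidia-container-runtime baked in
# (Dockerfile not shown here), then create the cluster from it
docker build -t my-k3s-nvidia:latest .
k3d cluster create gpu-test --image my-k3s-nvidia:latest
```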
Or for k3s:
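For plain k3s on the host, the setup seems to amount to installing NVIDIA's container runtime and dropping a containerd config template into the agent directory (the package name and paths below may differ per distro and JetPack version):

```sh
# Install NVIDIA's container runtime on the node
sudo apt-get install -y nvidia-container-runtime

# k3s renders its containerd config.toml from this template at startup
sudo cp config.toml.tmpl /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl

sudo systemctl restart k3s
```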
Then run the following to test this:
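A common smoke test is a pod that requests a GPU; this assumes the NVIDIA device plugin is deployed so `nvidia.com/gpu` is schedulable, and note that the image below is x86_64-only, so Jetson users would need an ARM64 CUDA image instead:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    # x86_64-only image; substitute an ARM64 CUDA image on Jetson
    image: k8s.gcr.io/cuda-vector-add:v0.1
    resources:
      limits:
        nvidia.com/gpu: 1
```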
However, I'm looking for somewhere I could learn a bit more about these settings with regard to NVIDIA, as I couldn't really find an explanation of why this works.
Shaked
-
Hey all,
I'm not sure if this should be posted here, but since k3s seems to be widely used on ARM-based machines, I thought I should post this both so others are aware of it and to figure out whether this is the best solution.
As most of the community has heard, as of Kubernetes 1.20, support for the Docker container engine is deprecated.
During our development cycle we have been heavily dependent on Docker, as we needed nvidia-docker. This allowed us to use our devices' GPUs (mainly Jetson Xavier) within Docker, and later on within our k3s cluster.
Now that Kubernetes is dropping Docker support and moving on to containerd, we figured we should quickly remove our dependency on Docker.
What we did:
1. Moved `/var/lib/rancher/k3s` to `/ssd/lib/rancher/k3s`. The main reason was that we want containerd to save the downloaded images on an external SSD drive instead of the main hard drive, which is quite low on space. (Before, we did the same with `/var/lib/docker`.)
2. Created `/etc/rancher/k3s/registries.yaml` with our own settings. Before, we did this in `/etc/docker/daemon.json`. (A sketch follows below.)
3. Removed the `--docker` argument from k3s.
4. Created `/ssd/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl` with the nvidia-runtime configuration (see the sketch further below).
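For step 2, as a rough illustration only: k3s's `registries.yaml` carries the registry mirror and auth settings that previously lived in `/etc/docker/daemon.json`; the registry hostname and credentials below are placeholders:

```yaml
mirrors:
  "registry.example.com":
    endpoint:
      - "https://registry.example.com"

configs:
  "registry.example.com":
    auth:
      username: example-user   # placeholder
      password: example-pass   # placeholder
```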
:We got nvidia-runtime's configuration from @klueska. Details at NVIDIA/nvidia-docker#1468 (comment)
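The template itself is not reproduced here; as a minimal sketch of the GPU-related part only, assuming containerd's `plugins.cri` key naming and nvidia-container-runtime installed at `/usr/bin/nvidia-container-runtime` (the rest of the file mirrors the `config.toml` that k3s generates on its own):

```toml
# GPU-related additions only; the remainder of config.toml.tmpl
# is copied from the config.toml that k3s generates by default.

[plugins.cri.containerd]
  # Make the NVIDIA runtime the default so containers can see the GPU
  default_runtime_name = "nvidia"

[plugins.cri.containerd.runtimes.nvidia]
  runtime_type = "io.containerd.runc.v2"

[plugins.cri.containerd.runtimes.nvidia.options]
  BinaryName = "/usr/bin/nvidia-container-runtime"
```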
I hope this helps others as it helped us,
Shaked