Help mounting nvoptix.bin in containers #4269
emaincourt asked this question in Q&A
Hi,
After trying to upgrade our Bottlerocket AMIs, we realized OptiX was broken. The reason, I suppose, is that in recent OptiX versions `nvoptix.bin` is required to load the denoiser weights, according to the error logs. I updated `packages/kmod-6.1-nvidia/kmod-6.1-nvidia.spec` to install the file when building the package; see the commit here. I'm pretty new to Bottlerocket, so I might actually be doing it wrong. Anyway, the file is properly available when I boot the host with my custom AMI, at the path `/usr/share/nvidia/nvoptix.bin`, which is the proper path for the nvidia-container-toolkit to load it from.
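For reference, the change is roughly along these lines; the fragment below is only a sketch, since the actual spec uses Bottlerocket's own build macros and source paths:

```
# Illustrative fragment only: macro names and source paths differ in the real
# kmod-6.1-nvidia spec; this just shows the shape of the change.
%install
install -d %{buildroot}%{_datadir}/nvidia
install -p -m 0644 nvoptix.bin %{buildroot}%{_datadir}/nvidia/nvoptix.bin

%files
%{_datadir}/nvidia/nvoptix.bin
```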
However, when I start a container with the right environment variables (e.g. `NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics,display`), the file does not get mounted into the container. I tried the same thing with an AL2023 AMI, and it works as expected: `nvoptix.bin` is located at the same path and properly gets mounted into the container.
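To be concrete, the kind of invocation I mean looks like this (shown with plain docker for brevity, and the image name is just an arbitrary CUDA base image):

```sh
# Assumes the nvidia runtime is configured for docker on the node.
docker run --rm --runtime=nvidia \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics,display \
  nvidia/cuda:12.4.1-base-ubuntu22.04 \
  ls -l /usr/share/nvidia/nvoptix.bin
# On AL2023 the file shows up inside the container; on Bottlerocket it does not.
```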
The `nvidia-container-runtime` configuration looks quite different on the AL2023 AMIs and the Bottlerocket ones. However, I do not feel like this is the issue: if I'm right, the `nvidia-container-runtime-hook` binary is in charge of mounting the files, and the configuration file from AL2023 does not seem to set any parameter that would make it behave differently.

There is a specific configuration property that can be set to enable debugging logs for the nvidia container runtime. Enabling that setting actually logs the files being mounted on AL2023; on Bottlerocket, it does nothing. I'm wondering whether the binary is not allowed to write to that path, but I might be wrong.
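For reference, the property I mean is the debug log path; in a stock nvidia-container-toolkit `config.toml` it looks roughly like this (file location and section layout may differ between the two AMIs and toolkit versions):

```toml
# Stock /etc/nvidia-container-runtime/config.toml layout (illustrative);
# setting these paths enables the debug logs mentioned above.
[nvidia-container-cli]
debug = "/var/log/nvidia-container-toolkit.log"

[nvidia-container-runtime]
debug = "/var/log/nvidia-container-runtime.log"
```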
Basically my questions would be: am I installing `nvoptix.bin` the right way, and why does it not get mounted into the container on Bottlerocket?

Thanks in advance for any help.