Upgrading gpu-operator on Rancher RKE2 results in nvidia-container-toolkit-daemonset failing to initialize #1099

Closed
nikito opened this issue Nov 1, 2024 · 2 comments

Comments

nikito commented Nov 1, 2024

When upgrading to the latest gpu-operator v24.9.0, the nvidia-container-toolkit-daemonset fails to initialize with the following error:
level=error msg="error running nvidia-toolkit: unable to determine runtime options: unable to load containerd config: failed to load config: failed to run command chroot [/host containerd config dump]: exit status 127"
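
For context, exit status 127 from chroot generally means the command it was asked to run could not be found inside the target root: the toolkit reads the host's containerd configuration by running containerd config dump chrooted into /host. Below is a minimal sketch of reproducing that check by hand; the namespace, pod label, and paths are assumptions rather than anything confirmed in this thread, and it relies on the fact that RKE2 ships its bundled containerd under /var/lib/rancher/rke2/bin rather than on the default PATH.

    # Sketch only; namespace, pod label, and paths are assumptions, adjust to your cluster.
    kubectl -n gpu-operator get pods -l app=nvidia-container-toolkit-daemonset

    # Re-run the exact command the toolkit reports as failing, from inside that pod.
    kubectl -n gpu-operator exec <toolkit-pod> -- chroot /host containerd config dump

    # Exit status 127 here means "containerd" is not on PATH inside the host root.
    # RKE2 typically keeps its bundled containerd under /var/lib/rancher/rke2/bin,
    # so pointing at it directly can show whether that is the mismatch:
    kubectl -n gpu-operator exec <toolkit-pod> -- chroot /host /var/lib/rancher/rke2/bin/containerd config dump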

If I roll back to v24.6.2, everything initializes correctly.
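
For anyone needing the same workaround, a minimal rollback sketch, assuming the operator was installed from NVIDIA's Helm chart with release name gpu-operator in the gpu-operator namespace (all of those names are assumptions; a Rancher UI install would instead be downgraded through Apps & Marketplace):

    # Sketch only; release name, namespace, and repo alias are assumptions.
    helm upgrade gpu-operator nvidia/gpu-operator \
      -n gpu-operator \
      --version v24.6.2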

nikito commented Nov 1, 2024

Closing this issue. I'm not sure what changed, but I uninstalled the operator, then reinstalled v24.9.0 from scratch, and everything appears to be working now.
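
For reference, the uninstall-and-reinstall path described above would look roughly like this with Helm, under the same naming assumptions as the rollback sketch earlier in the thread:

    # Sketch only; assumes the chart comes from the NVIDIA Helm repository.
    helm uninstall gpu-operator -n gpu-operator
    helm repo update
    helm install gpu-operator nvidia/gpu-operator \
      -n gpu-operator --create-namespace \
      --version v24.9.0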

nikito closed this as completed Nov 1, 2024
@jwindhager

I can confirm the issue. I'm running v24.9.0 on K3s/Flux on Ubuntu 24.04 LTS with driver 535. Rolling back to v24.6.2 fixes the issue. Unlike @nikito, I did not manage to upgrade to v24.9.0 after the rollback (I tried uninstalling and reinstalling from scratch). Staying with v24.6.2 for now.
