Determinant of Identity Matrix on CUDA #1143
Comments
Hi @k-bingcai, I was not able to reproduce the issue. It might be an incompatibility between the CUDA version and torch.
Hi @dfalbel, thanks for getting back! Here's my
The CUDA version is
Hope that clarifies!
I'm pretty sure the problem is caused by an ABI compatibility issue between CUDA 11 (used by torch) and the CUDA 12 you have installed in that environment. I suggest you install torch using the pre-built binaries, which include compatible CUDA and cuDNN versions. You can do so by running something like:
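Something along these lines, adapted from the torch installation vignette, should work; the `kind` value and the CDN URL below are assumptions and may need adjusting for your torch version and platform:

```r
# Sketch: install the pre-built torch binaries that bundle CUDA/cuDNN.
options(timeout = 600)  # the GPU build is a large download

kind <- "cu118"         # assumed CUDA flavour; pick the one offered for your platform
version <- available.packages()["torch", "Version"]

options(repos = c(
  torch = sprintf("https://torch-cdn.mlverse.org/packages/%s/%s/", kind, version),
  CRAN  = "https://cloud.r-project.org"
))

install.packages("torch")
```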
Hello, thanks for the quick response! I'll try the proposed solution. I have a rather naive question though: will the pre-built binaries work even though CUDA 12.2 is installed on the system? The documentation seems to suggest so. I am asking because the GPU is on a university-wide cluster and I cannot change the CUDA driver version...
With the pre-built binaries the globally installed CUDA version doesn't matter, as the correct version is shipped within the package. That's actually a similar approach to what PyTorch does.
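Once the pre-built package is installed, a quick sanity check along these lines should confirm the bundled runtime loads and the original reproduction works (the expected results are assumptions):

```r
library(torch)

cuda_is_available()        # should be TRUE if the bundled CUDA runtime loads
torch_eye(3)$cuda()$det()  # should now return a scalar tensor equal to 1
```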
Hello,
I noticed that I cannot compute the determinant of an identity matrix using torch in R.
i.e.
torch_eye(3)$cuda()$det()
It gives me this error:
I'm not sure what to make of it. I tried computing the same determinant in PyTorch and it worked fine. Is this a bug, or is this something to be expected?
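For reference, a minimal sketch contrasting the CPU and CUDA paths in torch for R; the CPU call succeeding (returning 1, the determinant of the identity) is an assumption, while the CUDA call is the one that errors here:

```r
library(torch)

# CPU path: det(I_3) is expected to be 1
torch_eye(3)$det()

# CUDA path: the call reported in this issue
torch_eye(3)$cuda()$det()
```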