Dynamic loading #11
Comments
Dynamic loading by itself won't help, because some CUDA versions break binary compatibility (see termoshtt/accel#58 and the follow-ups in #4 and #12), and dynamic loading still requires binary compatibility. Do you have any pointers to other projects that implement dynamic loading for CUDA?
There is ongoing work in TensorFlow, for example. In terms of version compatibility I don't see much difference here: you can still check the version of the loaded library.
Sorry, I misunderstood your first post. This is orthogonal to the version incompatibility issue. Looking through tensorflow/tensorflow@f092c9d, what they're doing is creating a shim for each and every CUDA function. This makes it possible to switch between CUDA and the shim at runtime. I'm not sure cuda-sys is the right place for these kinds of tricks: they introduce some new trade-offs, and the purpose of this crate is to provide bare-bones CUDA bindings. Off the top of my head, I can think of several concerns. Perhaps someone with more experience than me could pitch in?
I agree that this seems out of scope for cuda-sys. If you'd like to take the shim approach, you could create a separate shim library that, for all intents and purposes, looks like the CUDA library, but performs the necessary indirection described here.
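A minimal sketch of that shim idea, in std-only Rust. The function name, signature, and error codes mirror CUDA's status-integer convention but are illustrative assumptions, not the real cuda-sys API; a real shim would populate the slot from `dlsym` after opening the library:

```rust
use std::sync::OnceLock;

// Status codes following CUDA's convention of returning an integer.
// The exact values here are illustrative.
pub const CUDA_SUCCESS: i32 = 0;
pub const CUDA_ERROR_NOT_INITIALIZED: i32 = 3;

// Slot for the real function pointer, filled in by a loader
// (e.g. via dlopen/dlsym) if the library is found at runtime.
static CU_INIT: OnceLock<fn(u32) -> i32> = OnceLock::new();

// The shim exposes the same signature as the real entry point.
// If no library was ever loaded, it fails gracefully instead of crashing.
pub fn cu_init(flags: u32) -> i32 {
    match CU_INIT.get() {
        Some(f) => f(flags),
        None => CUDA_ERROR_NOT_INITIALIZED,
    }
}

fn main() {
    // Without a loaded library, the shim reports an error code...
    assert_eq!(cu_init(0), CUDA_ERROR_NOT_INITIALIZED);
    // ...and once a loader installs a pointer, calls are forwarded.
    CU_INIT.set(|_flags| CUDA_SUCCESS).ok();
    assert_eq!(cu_init(0), CUDA_SUCCESS);
    println!("shim dispatch ok");
}
```

Callers see the same API either way; only the dispatch target changes at runtime.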
It can also make incompatibility issues easier to handle, since you can choose the function signature at runtime.
As you can see, static pointers to the library functions are set once, so there is no need for a branch at each function call.
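The "resolve once" pattern described above can be sketched like this in std-only Rust. All names are illustrative; the stub closures stand in for `dlsym` lookups:

```rust
use std::sync::OnceLock;

// Table of function pointers, looked up a single time when the
// library is opened. In a real loader these would come from dlsym.
struct CudaApi {
    device_count: fn() -> i32,
    driver_version: fn() -> i32,
}

static API: OnceLock<CudaApi> = OnceLock::new();

// One-time initialization; returns Err if already done (or, in a real
// loader, if the library could not be found).
fn init_api() -> Result<(), &'static str> {
    API.set(CudaApi {
        device_count: || 1,      // stand-in for the resolved symbol
        driver_version: || 11020, // stand-in for the resolved symbol
    })
    .map_err(|_| "already initialized")
}

// After init, each call is a direct jump through the stored pointer,
// with no availability check on the hot path.
fn device_count() -> i32 {
    (API.get().expect("init_api not called").device_count)()
}

fn main() {
    init_api().unwrap();
    println!("devices: {}", device_count());
}
```

The availability check happens once at load time; subsequent calls pay only the cost of an indirect call.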
Hello! Do you have plans to add support for dynamically loading the CUDA library at runtime?
Something like rust-dlopen, for example.
With the current approach, this crate will crash if the CUDA libraries are not found,
instead of returning an error to the user (which would make it possible to fall back to CPU/OpenCL code).
Also, with dynamic loading it would be possible to locate the CUDA libraries at runtime,
so it wouldn't be necessary to run
`export DYLD_LIBRARY_PATH=/usr/local/cuda/lib`
on macOS (and probably `LD_LIBRARY_PATH` on Linux).
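From the caller's side, the requested behavior could look like the sketch below: probe the usual install locations and return a value the application can branch on, falling back to a CPU path instead of aborting. The candidate paths and the `Backend` type are assumptions for illustration; a real loader would `dlopen` the found path and resolve symbols:

```rust
use std::path::PathBuf;

#[derive(Debug)]
enum Backend {
    Cuda(PathBuf),
    Cpu,
}

// Candidate locations to probe, sparing users the
// LD_LIBRARY_PATH / DYLD_LIBRARY_PATH export.
fn cuda_candidates() -> Vec<PathBuf> {
    vec![
        PathBuf::from("/usr/local/cuda/lib64/libcudart.so"),
        PathBuf::from("/usr/local/cuda/lib/libcudart.dylib"),
    ]
}

// Pick CUDA if any candidate exists, otherwise fall back gracefully.
fn select_backend() -> Backend {
    cuda_candidates()
        .into_iter()
        .find(|p| p.exists())
        .map(Backend::Cuda)
        .unwrap_or(Backend::Cpu)
}

fn main() {
    match select_backend() {
        Backend::Cuda(path) => println!("using CUDA at {}", path.display()),
        Backend::Cpu => println!("CUDA not found, using CPU fallback"),
    }
}
```

Either way the program keeps running, which is exactly the difference from a hard link-time dependency.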