Extend GPU support to Metal, ROCm, and oneAPI backends #405
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master     #405      +/-   ##
==========================================
- Coverage   89.47%   89.32%   -0.16%
==========================================
  Files          43       44       +1
  Lines        2728     2736       +8
==========================================
+ Hits         2441     2444       +3
- Misses        287      292       +5
==========================================
```

Flags with carried forward coverage won't be shown.
Perfect! Add `print_devices` in `docs/src/reference/3-koma-core.md` so the documentation build doesn't fail. Maybe something like this (I renamed the subsection):

## GPU helper functions

```@docs
print_devices
gpu
cpu
f32
f64
```
Should we add a warning to `KomaMetalExt.__init__` saying that performance could be affected by JuliaGPU/Metal.jl#348?
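A minimal sketch of what that warning could look like, assuming the extension module is named `KomaMetalExt` and has (or can be given) an `__init__` function; the exact message wording is an assumption:

```julia
# Hypothetical sketch: emit a load-time warning from the Metal extension.
# The module name KomaMetalExt comes from the PR discussion; the message
# text is illustrative only.
module KomaMetalExt

using Metal

function __init__()
    @warn "Metal GPU performance may be degraded due to a known upstream " *
          "issue; see JuliaGPU/Metal.jl#348 for details."
end

end # module
```

Since `__init__` runs when the extension is loaded, users would see the warning exactly once, at `using Metal` time, rather than on every simulation call.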
Amazing 😃!! Well done!
This pull request adds support for the Metal, ROCm, and oneAPI GPU backends via the corresponding Julia packages (Metal.jl, AMDGPU.jl, oneAPI.jl). CUDA is still supported but is no longer an explicit dependency, so users now need to add `using CUDA` to their code; otherwise they will see a message informing them that no GPU backends are loaded. I've run the KomaMRICore tests with CUDA and Metal and ensured that they pass, but for AMD and oneAPI we will need to wait until #147 to confirm that this works as expected.

I haven't made any documentation updates yet, so let me know if that needs to be done before merging. The last thing to note concerns the `ArbitraryMotion` struct: it had a custom `adapt_storage` function for adapting to a CUDA device, and I wasn't sure whether that was necessary, since Adapt.jl and Functors.jl should be able to adapt the fields correctly, and the `ArbitraryMotion` GPU test still passed with my changes in place. Nevertheless, it's possible I will need to add that function back and define `adapt_storage` functions for converting `ArbitraryMotion` to the other backends.
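For context, the backend-generic alternative described above can be sketched as follows. This is an illustration only: the field names are assumptions, not the real `ArbitraryMotion` layout, and the point is just that `Adapt.@adapt_structure` recurses `adapt` over each field, which is usually sufficient when the fields are plain arrays, making a per-backend `adapt_storage` method unnecessary:

```julia
# Hypothetical sketch: a simplified stand-in for ArbitraryMotion with
# made-up fields, adapted generically instead of via a CUDA-specific
# adapt_storage method.
using Adapt

struct ArbitraryMotion{T}
    dx::AbstractArray{T}   # illustrative field
    dy::AbstractArray{T}   # illustrative field
    dz::AbstractArray{T}   # illustrative field
end

# Generates Adapt.adapt_structure, which calls `adapt` on every field,
# so moving the struct to any backend (CuArray, MtlArray, ROCArray, ...)
# works through the one mechanism Adapt.jl already provides.
Adapt.@adapt_structure ArbitraryMotion
```

With this in place, `adapt(CuArray, motion)` (or the Metal/ROCm/oneAPI equivalents) converts all array fields at once, which matches the observation that the GPU test still passes without the hand-written CUDA rule.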