In some cases, it may be beneficial to run MRI simulations on multiple GPUs if the problem is too large for single-GPU memory. KomaMRI does not have built-in support for this, but since each spin is simulated independently, it would not be too hard to manually split the simulation into parts, run each part on a different GPU, and sum the results at the end.
Based on this section of the CUDA.jl documentation, here is one example I think would work without needing to update code within the package (NOTE: I have not tested this!):
using Distributed, CUDA

addprocs(length(devices()))        # one worker process per GPU
@everywhere using CUDA, KomaMRI

sys = Scanner()
obj = Phantom()
seq = read_seq("SequenceFileName.seq")

# Split the spin indices into one chunk per GPU
parts = kfoldperm(length(obj), length(devices()))

# Run each chunk on its own worker/GPU and fetch the partial signals
signal_arr = asyncmap(enumerate(zip(workers(), devices()))) do (i, (p, d))
    remotecall_fetch(p) do
        device!(d)                                 # bind this worker to GPU d
        simulate(@view(obj[parts[i]]), seq, sys)   # simulate this chunk of spins
    end
end

# Partial signals add up thanks to the independent spin property
signal = reduce(+, signal_arr)
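For intuition, the kfoldperm call above only needs to partition the spin indices 1:length(obj) into one chunk per GPU. Below is a minimal sketch of such a partition using a hypothetical split_indices helper; this is not KomaMRI's actual kfoldperm (which, as the name suggests, may also permute the indices), just an illustration of how the chunks could be balanced:

```julia
# Hypothetical helper: split indices 1:n into k nearly equal contiguous ranges.
# Not KomaMRI's kfoldperm implementation, only a sketch of the idea.
function split_indices(n, k)
    chunk, rem = divrem(n, k)
    # The first `rem` chunks get one extra index so all n spins are covered
    sizes  = [chunk + (i <= rem ? 1 : 0) for i in 1:k]
    stops  = cumsum(sizes)
    starts = [1; stops[1:end-1] .+ 1]
    return [starts[i]:stops[i] for i in 1:k]
end

split_indices(10, 3)  # → [1:4, 5:7, 8:10]
```

Each range can then be used to index the phantom, as `obj[parts[i]]` does above.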
If this does work, it would be helpful to add it to the repository's examples folder and the package documentation.