In some cases it may be necessary to run MRI simulations across multiple compute nodes, if the problem is too large to fit in the memory of a single computer. Similar to #355, we could add an example that divides the phantom object into parts and uses Distributed.jl to distribute the simulation across multiple workers. I think ClusterManagers.jl can wrap Distributed.jl to operate on different backends (for example, MPIClusterManagers.jl for an MPI backend).
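A minimal sketch of this idea, assuming the per-chunk signals can simply be summed (the MR signal is additive over spins). `simulate_part` and the chunking of `spins` are hypothetical stand-ins for KomaMRI's actual phantom and `simulate` API:

```julia
using Distributed
addprocs(2)  # or launch workers on other nodes via ClusterManagers.jl

@everywhere function simulate_part(spins)
    # Placeholder for the per-chunk simulation kernel; each worker
    # computes the partial signal contributed by its subset of spins.
    sum(sin, spins)
end

spins  = range(0, 2π; length=1_000)               # stand-in for phantom spin data
chunks = collect(Iterators.partition(spins, 250)) # split the phantom into parts
signal = sum(pmap(simulate_part, chunks))         # distribute chunks, reduce partials
```

With an MPI backend, only the `addprocs` call would change (e.g. to an `MPIClusterManagers.jl` manager); the `pmap`-and-reduce structure stays the same.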
If this isn't feasible for some reason, support for multi-node computing would need to be added within the package itself. The approach used by DifferentialEquations.jl, LinearSolve.jl, and other packages is to write inner functions using generic array operations that also work for distributed array types, which is more or less how KomaMRI's current GPU simulation works. So a possible approach is to use Adapt.jl and Functors.jl to convert all arrays in the simulation structs to whichever distributed array type (DArray, MPIArray, etc.) we want to support, although this is probably easier said than done.
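A sketch of the struct-conversion idea using Functors.jl and DistributedArrays.jl, analogous to how data gets moved to the GPU. The `SpinData` struct and its fields are illustrative, not KomaMRI's actual types:

```julia
using Distributed
addprocs(2)
@everywhere using DistributedArrays
using Functors

# Hypothetical simulation struct holding per-spin arrays.
struct SpinData
    x::Vector{Float64}   # spin positions
    T1::Vector{Float64}  # relaxation times
end
@functor SpinData        # tell Functors.jl which fields to traverse

# fmap walks the struct and replaces every array leaf with a DArray
# spread over the workers, leaving non-array fields untouched.
to_distributed(s) = fmap(a -> a isa AbstractArray ? distribute(a) : a, s)

data  = SpinData(rand(100), fill(1.0, 100))
ddata = to_distributed(data)
```

After conversion, any inner function written with generic array operations (broadcasts, reductions) would dispatch to the distributed implementations, which is the same mechanism the GPU path relies on.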