From 02e66c1f43c5d00d6cd23b6652c0c4b3e989dc6c Mon Sep 17 00:00:00 2001 From: rkierulf Date: Fri, 23 Aug 2024 13:59:15 -0500 Subject: [PATCH 01/11] Add GPU explanation section --- docs/src/explanation/4-gpu-explanation.md | 31 +++++++++++++++++++++++ 1 file changed, 31 insertions(+) create mode 100644 docs/src/explanation/4-gpu-explanation.md diff --git a/docs/src/explanation/4-gpu-explanation.md b/docs/src/explanation/4-gpu-explanation.md new file mode 100644 index 000000000..4edb33592 --- /dev/null +++ b/docs/src/explanation/4-gpu-explanation.md @@ -0,0 +1,31 @@ +# GPU Parallelization + +KomaMRI uses a vendor agnostic approach to GPU parallelization in order to support multiple GPU backends. Currently, the following backends are supported: + +* CUDA.jl (Nvidia) +* Metal.jl (Apple) +* AMDGPU.jl (AMD) +* oneAPI.jl (Intel) + +## Choosing a GPU Backend + +To determine which backend to use, KomaMRI uses [package extensions](https://pkgdocs.julialang.org/v1/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)) (introduced in Julia 1.9) to avoid having the packagaes for each GPU backend as explicit dependencies. This means that the user is responsible for loading the backend package (e.g. `using CUDA`) at the beginning of their code, or prior to calling KomaUI(), otherwise, Koma will default back to the CPU. Once this is done, no further action is recquired! The simulation objects will automatically be moved to the GPU and back once the simulation is finished. When the simulation is run a message will be shown with either the GPU device being used or the number of CPU threads if running on the CPU. + +## How Objects are moved to the GPU + +KomaMRI has a general purpose function, `gpu`, to move data from the CPU to the GPU. The `gpu` function implementation calls a separate `gpu` function with a backend parameter of type `<:KernelAbstractions.GPU` for the backend it is using. 
This function then calls the `fmap` function from package `Functors.jl` to recursively call `adapt` from package `Adapt.jl` on each field of the object being transferred. This is similar to how many other Julia packages, such as `Flux.jl`, transfer data to the GPU. However, an important difference is that KomaMRI adapts directly to the `KernelAbstractions.Backend` type in order to use the `adapt_storage` functions defined in each backend package, rather than defining custom adapters, resulting in an implementation with fewer lines of code. + +## Inside the Simulation + +KomaMRI has three different simulation methods, all of which can run on the GPU: + +* `Bloch` +* `BlochSimple` +* `BlochDict` + +Of the three methods, `Bloch` is the most optimized, and has separate implementations specialized for the CPU and GPU. `BlochSimple` is equivalent to `Bloch` in the operations it performs, but less optimized and easier to understand. `BlochDict` can be understood as an extension of `BlochSimple` that outputs a more complete signal. + +`BlochSimple` and `Bloch` take slightly different approaches to GPU parallelization. `BlochSimple` exclusively uses array broadcasting, with parallelization on the arrays being done implicitly by the GPU compiler. In constrast, `Bloch` uses explicit GPU kernels where advantageous, using package `KernelAbstractions.jl`. Readers curious about the performance improvements between `Bloch` and `BlochSimple` may want to look at the following pull reqeusts: + +* [(459) Optimize run_spin_precession! for GPU](https://github.com/JuliaHealth/KomaMRI.jl/pull/459) +* [(462) Optimize run_spin_excitation! 
for GPU](https://github.com/JuliaHealth/KomaMRI.jl/pull/462) From 64dbc8121fa7735431e3d4e50aa7ea6851f4ed46 Mon Sep 17 00:00:00 2001 From: rkierulf Date: Fri, 23 Aug 2024 16:12:26 -0500 Subject: [PATCH 02/11] Update 4-gpu-explanation.md --- docs/src/explanation/4-gpu-explanation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/explanation/4-gpu-explanation.md b/docs/src/explanation/4-gpu-explanation.md index 4edb33592..190789ae8 100644 --- a/docs/src/explanation/4-gpu-explanation.md +++ b/docs/src/explanation/4-gpu-explanation.md @@ -25,7 +25,7 @@ KomaMRI has three different simulation methods, all of which can run on the GPU: Of the three methods, `Bloch` is the most optimized, and has separate implementations specialized for the CPU and GPU. `BlochSimple` is equivalent to `Bloch` in the operations it performs, but less optimized and easier to understand. `BlochDict` can be understood as an extension of `BlochSimple` that outputs a more complete signal. -`BlochSimple` and `Bloch` take slightly different approaches to GPU parallelization. `BlochSimple` exclusively uses array broadcasting, with parallelization on the arrays being done implicitly by the GPU compiler. In constrast, `Bloch` uses explicit GPU kernels where advantageous, using package `KernelAbstractions.jl`. Readers curious about the performance improvements between `Bloch` and `BlochSimple` may want to look at the following pull reqeusts: +`BlochSimple` and `Bloch` take slightly different approaches to GPU parallelization. `BlochSimple` exclusively uses array broadcasting, with parallelization on the arrays being done implicitly by the GPU compiler. In constrast, `Bloch` uses explicit GPU kernels where advantageous, using package `KernelAbstractions.jl`. Readers curious about the performance improvements between `Bloch` and `BlochSimple` may be interested to look at the following pull reqeusts: * [(459) Optimize run_spin_precession! 
for GPU](https://github.com/JuliaHealth/KomaMRI.jl/pull/459) * [(462) Optimize run_spin_excitation! for GPU](https://github.com/JuliaHealth/KomaMRI.jl/pull/462) From 632f6632b536419fd0120fa5501d299f8410d897 Mon Sep 17 00:00:00 2001 From: rkierulf Date: Fri, 23 Aug 2024 16:20:59 -0500 Subject: [PATCH 03/11] Fix typo --- docs/src/explanation/4-gpu-explanation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/explanation/4-gpu-explanation.md b/docs/src/explanation/4-gpu-explanation.md index 190789ae8..57942a095 100644 --- a/docs/src/explanation/4-gpu-explanation.md +++ b/docs/src/explanation/4-gpu-explanation.md @@ -9,7 +9,7 @@ KomaMRI uses a vendor agnostic approach to GPU parallelization in order to suppo ## Choosing a GPU Backend -To determine which backend to use, KomaMRI uses [package extensions](https://pkgdocs.julialang.org/v1/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)) (introduced in Julia 1.9) to avoid having the packagaes for each GPU backend as explicit dependencies. This means that the user is responsible for loading the backend package (e.g. `using CUDA`) at the beginning of their code, or prior to calling KomaUI(), otherwise, Koma will default back to the CPU. Once this is done, no further action is recquired! The simulation objects will automatically be moved to the GPU and back once the simulation is finished. When the simulation is run a message will be shown with either the GPU device being used or the number of CPU threads if running on the CPU. +To determine which backend to use, KomaMRI uses [package extensions](https://pkgdocs.julialang.org/v1/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)) (introduced in Julia 1.9) to avoid having the packages for each GPU backend as explicit dependencies. This means that the user is responsible for loading the backend package (e.g. 
`using CUDA`) at the beginning of their code, or prior to calling KomaUI(), otherwise, Koma will default back to the CPU. Once this is done, no further action is recquired! The simulation objects will automatically be moved to the GPU and back once the simulation is finished. When the simulation is run a message will be shown with either the GPU device being used or the number of CPU threads if running on the CPU. ## How Objects are moved to the GPU From 91886c7c0742121c7c0d92a4cff321d8db040bdf Mon Sep 17 00:00:00 2001 From: rkierulf Date: Fri, 23 Aug 2024 18:26:06 -0500 Subject: [PATCH 04/11] Update 4-gpu-explanation.md --- docs/src/explanation/4-gpu-explanation.md | 56 ++++++++++++++++++++--- 1 file changed, 49 insertions(+), 7 deletions(-) diff --git a/docs/src/explanation/4-gpu-explanation.md b/docs/src/explanation/4-gpu-explanation.md index 57942a095..9c37dd6ab 100644 --- a/docs/src/explanation/4-gpu-explanation.md +++ b/docs/src/explanation/4-gpu-explanation.md @@ -9,23 +9,65 @@ KomaMRI uses a vendor agnostic approach to GPU parallelization in order to suppo ## Choosing a GPU Backend -To determine which backend to use, KomaMRI uses [package extensions](https://pkgdocs.julialang.org/v1/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)) (introduced in Julia 1.9) to avoid having the packages for each GPU backend as explicit dependencies. This means that the user is responsible for loading the backend package (e.g. `using CUDA`) at the beginning of their code, or prior to calling KomaUI(), otherwise, Koma will default back to the CPU. Once this is done, no further action is recquired! The simulation objects will automatically be moved to the GPU and back once the simulation is finished. When the simulation is run a message will be shown with either the GPU device being used or the number of CPU threads if running on the CPU. 
+To determine which backend to use, KomaMRI uses [package extensions](https://pkgdocs.julialang.org/v1/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)) (introduced in Julia 1.9) to avoid having the packages for each GPU backend as explicit dependencies. This means that the user is responsible for loading the backend package (e.g. `using CUDA`) at the beginning of their code, or prior to calling KomaUI(), otherwise, Koma will default back to the CPU: + +```julia +using KomaMRI +using CUDA # loading CUDA will load KomaMRICoreCUDAExt, selecting the backend +``` + +Once this is done, no further action is recquired! The simulation objects will automatically be moved to the GPU and back once the simulation is finished. When the simulation is run a message will be shown with either the GPU device being used or the number of CPU threads if running on the CPU. + +Of course, it is still possible to move objects to the GPU manually, and control precision using the f32 and f64 functions: + +```julia +x = rand(100) +x |> f32 |> gpu # Float32 CuArray +``` + +To change the precision level used in the simulation, the `sim_params["precision"]` parameter can be set to either `f32` or `f64` (Note that for most GPUs, Float32 operations are considerably faster compared with Float64). In addition, the `sim_params["gpu"]` option can be set to true or false to enable / disable the gpu functionality (if set to true, the backend package will still need to be loaded beforehand): + +```julia +using KomaMRI +sys = Scanner() +obj = brain_phantom2D() +seq = PulseDesigner.EPI_example() + +#Simulate on the GPU using 32-bit floating point values +sim_params = Dict{String,Any}( + "Nblocks" => 20, + "gpu" => false, + "precision" => "f64", + "sim_method" => Bloch(), +) +simulate(obj, seq, sys; sim_params) +``` + ## How Objects are moved to the GPU -KomaMRI has a general purpose function, `gpu`, to move data from the CPU to the GPU.
The `gpu` function implementation calls a separate `gpu` function with a backend parameter of type `<:KernelAbstractions.GPU` for the backend it is using. This function then calls the `fmap` function from package `Functors.jl` to recursively call `adapt` from package `Adapt.jl` on each field of the object being transferred. This is similar to how many other Julia packages, such as `Flux.jl`, transfer data to the GPU. However, an important difference is that KomaMRI adapts directly to the `KernelAbstractions.Backend` type in order to use the `adapt_storage` functions defined in each backend package, rather than defining custom adapters, resulting in an implementation with fewer lines of code. +Koma's `gpu` function implementation calls a separate `gpu` function with a backend parameter of type `<:KernelAbstractions.GPU` for the backend it is using. This function then calls the `fmap` function from package `Functors.jl` to recursively call `adapt` from package `Adapt.jl` on each field of the object being transferred. This is similar to how many other Julia packages, such as `Flux.jl`, transfer data to the GPU. However, an important difference is that KomaMRI adapts directly to the `KernelAbstractions.Backend` type in order to use the `adapt_storage` functions defined in each backend package, rather than defining custom adapters, resulting in an implementation with fewer lines of code. 
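As a rough sketch of the adapt-based pattern described above (the function name `move_to_backend` is illustrative only, not Koma's actual internal API), the core idea can be written as:

```julia
using Adapt, Functors
import KernelAbstractions

# Hypothetical sketch: recursively walk every field of a (possibly nested) struct
# with `fmap`, letting `adapt` dispatch to the `adapt_storage` rules that each GPU
# backend package defines for the KernelAbstractions backend types.
move_to_backend(x, backend::KernelAbstractions.GPU) = fmap(y -> Adapt.adapt(backend, y), x)
```

With CUDA.jl loaded, calling something like `move_to_backend(phantom, CUDABackend())` would then return an object whose array fields are `CuArray`s, without any package-specific adapter definitions.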
## Inside the Simulation KomaMRI has three different simulation methods, all of which can run on the GPU: -* `Bloch` -* `BlochSimple` -* `BlochDict` +* `BlochSimple`: [BlochSimple.jl](https://github.com/JuliaHealth/KomaMRI.jl/blob/master/KomaMRICore/src/simulation/SimMethods/BlochSimple/BlochSimple.jl) +* `BlochDict`: [BlochDict.jl](https://github.com/JuliaHealth/KomaMRI.jl/blob/master/KomaMRICore/src/simulation/SimMethods/BlochDict/BlochDict.jl) +* `Bloch`: [BlochCPU.jl](https://github.com/JuliaHealth/KomaMRI.jl/blob/master/KomaMRICore/src/simulation/SimMethods/Bloch/BlochCPU.jl) / [BlochGPU.jl](https://github.com/JuliaHealth/KomaMRI.jl/blob/master/KomaMRICore/src/simulation/SimMethods/Bloch/BlochGPU.jl) + +`BlochSimple` is the simplest method and prioritizes readability. + +`BlochDict` can be understood as an extension to `BlochSimple` that outputs a more detailed signal. + +`Bloch` is equivalent to `BlochSimple` in the operations it performs, but is much faster since it has been optimized both for the CPU and GPU. The CPU implementation prioritizes conserving memory, and makes extensive use of pre-allocation for the simulation arrays. Unlike the GPU implementation, it does not allocate a matrix of size `Number of Spins x Number of Time Points` in each block, and instead uses a for loop to step through time. + +In contrast, the GPU implementation divides work among as many threads as possible at the beginning of the `run_spin_precession!` and `run_spin_excitation!` functions. For the CPU implementation, this would not be beneficial since there are far fewer CPU threads available compared with the GPU. Preallocation is also used via the same `prealloc` function used in `BlochCPU.jl`, where a struct of arrays is allocated at the beginning of the simulation that can be re-used in each simulation block. In addition, a `precalc` function is called before moving the simulation objects to the GPU to do certain calculations that are faster on the CPU beforehand.
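To make the one-thread-per-spin work division concrete, a toy `KernelAbstractions.jl` kernel (a simplified illustration, not Koma's actual simulation code) could look like:

```julia
using KernelAbstractions

# Toy kernel: each GPU thread handles one spin and applies its accumulated phase.
@kernel function apply_phase!(M, @Const(ϕ))
    i = @index(Global)      # global thread index = spin index
    M[i] *= cis(-ϕ[i])      # rotate transverse magnetization by exp(-i*ϕ[i])
end

# The same kernel runs on any backend, e.g. CPU() for testing or CUDABackend()
# with CUDA.jl loaded; `ndrange` controls how many spins (threads) are launched:
# apply_phase!(KernelAbstractions.get_backend(M), 256)(M, ϕ; ndrange = length(M))
```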
-Of the three methods, `Bloch` is the most optimized, and has separate implementations specialized for the CPU and GPU. `BlochSimple` is equivalent to `Bloch` in the operations it performs, but less optimized and easier to understand. `BlochDict` can be understood as an extension of `BlochSimple` that outputs a more complete signal. +Compared with `BlochSimple`, which only uses array broadcasting for parallelization, `Bloch` also uses kernel-based methods in its `run_spin_excitation!` function for operations which need to be done sequentially. The [kernel implementation](https://github.com/JuliaHealth/KomaMRI.jl/blob/master/KomaMRICore/src/simulation/SimMethods/Bloch/KernelFunctions.jl) uses shared memory to store the necessary arrays for applying the spin excitation for fast memory access, and separates the complex arrays into real and imaginary components to avoid bank conflicts. -`BlochSimple` and `Bloch` take slightly different approaches to GPU parallelization. `BlochSimple` exclusively uses array broadcasting, with parallelization on the arrays being done implicitly by the GPU compiler. In constrast, `Bloch` uses explicit GPU kernels where advantageous, using package `KernelAbstractions.jl`. Readers curious about the performance improvements between `Bloch` and `BlochSimple` may be interested to look at the following pull reqeusts: +The performance differences between Bloch and BlochSimple are illustrated on the KomaMRI [benchmarks page](https://juliahealth.org/KomaMRI.jl/benchmarks/). The first data point is from when `Bloch` was what is now `BlochSimple`, before a more optimized implementation was added. The following three pull requests are primarily responsible for the performance improvements made in between then and now: +* [(443) Optimize run_spin_precession! and run_spin_excitation! for CPU](https://github.com/JuliaHealth/KomaMRI.jl/pull/443) * [(459) Optimize run_spin_precession! 
for GPU](https://github.com/JuliaHealth/KomaMRI.jl/pull/459) * [(462) Optimize run_spin_excitation! for GPU](https://github.com/JuliaHealth/KomaMRI.jl/pull/462) From 9358c19ab182fb1703ef8107522be1d82cde9469 Mon Sep 17 00:00:00 2001 From: rkierulf Date: Fri, 23 Aug 2024 18:31:16 -0500 Subject: [PATCH 05/11] More updates --- docs/src/explanation/4-gpu-explanation.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/docs/src/explanation/4-gpu-explanation.md b/docs/src/explanation/4-gpu-explanation.md index 9c37dd6ab..ad51ab2b5 100644 --- a/docs/src/explanation/4-gpu-explanation.md +++ b/docs/src/explanation/4-gpu-explanation.md @@ -16,7 +16,7 @@ using KomaMRI using CUDA # loading CUDA will load KomaMRICoreCUDAExt, selecting the backend ``` -Once this is done, no further action is recquired! The simulation objects will automatically be moved to the GPU and back once the simulation is finished. When the simulation is run a message will be shown with either the GPU device being used or the number of CPU threads if running on the CPU. +Once this is done, no further action is needed! The simulation objects will automatically be moved to the GPU and back once the simulation is finished. When the simulation is run a message will be shown with either the GPU device being used or the number of CPU threads if running on the CPU. Of course, it is still possible to move objects to the GPU manually, and control precision using the f32 and f64 functions: @@ -25,10 +25,11 @@ x = rand(100) x |> f32 |> gpu # Float32 CuArray ``` -To change the precision level used in the simulation, the `sim_params["precision"]` parameter can be set to either `f32` or `f64` (Note that for most GPUs, Float32 operations are considerably faster compared with Float64). 
In addition, the `sim_params["gpu"]` option can be set to true or false to enable / disable the gpu functionality (if set to true, the backend package will still need to be loaded beforehand): +To change the precision level used for the entire simulation, the `sim_params["precision"]` parameter can be set to either `f32` or `f64` (Note that for most GPUs, Float32 operations are considerably faster than Float64). In addition, the `sim_params["gpu"]` option can be set to true or false to enable or disable the GPU functionality (if set to true, the backend package will still need to be loaded beforehand): ```julia using KomaMRI +using CUDA sys = Scanner() obj = brain_phantom2D() seq = PulseDesigner.EPI_example() #Simulate on the GPU using 32-bit floating point values sim_params = Dict{String,Any}( "Nblocks" => 20, - "gpu" => false, - "precision" => "f64" + "gpu" => true, + "precision" => "f32", "sim_method" => Bloch(), ) simulate(obj, seq, sys; sim_params) ``` @@ -60,13 +61,13 @@ KomaMRI has three different simulation methods, all of which can run on the GPU: `BlochDict` can be understood as an extension to `BlochSimple` that outputs a more detailed signal. -`Bloch` is equivalent to `BlochSimple` in the operations it performs, but is much faster since it has been optimized both for the CPU and GPU. The CPU implementation prioritizes conserving memory, and makes extensive use of pre-allocation for the simulation arrays.
Unlike the GPU implementation, it does not allocate a matrix of size `Number of Spins x Number of Time Points` in each block, instead using a for loop to step through time. In contrast, the GPU implementation divides work among as many threads as possible at the beginning of the `run_spin_precession!` and `run_spin_excitation!` functions. For the CPU implementation, this would not be beneficial since there are far fewer CPU threads available compared with the GPU. Preallocation is also used via the same `prealloc` function used in `BlochCPU.jl`, where a struct of arrays is allocated at the beginning of the simulation that can be re-used in each simulation block. In addition, a `precalc` function is called before moving the simulation objects to the GPU to do certain calculations that are faster on the CPU beforehand. Compared with `BlochSimple`, which only uses array broadcasting for parallelization, `Bloch` also uses kernel-based methods in its `run_spin_excitation!` function for operations which need to be done sequentially. The [kernel implementation](https://github.com/JuliaHealth/KomaMRI.jl/blob/master/KomaMRICore/src/simulation/SimMethods/Bloch/KernelFunctions.jl) uses shared memory to store the necessary arrays for applying the spin excitation for fast memory access, and separates the complex arrays into real and imaginary components to avoid bank conflicts. -The performance differences between Bloch and BlochSimple are illustrated on the KomaMRI [benchmarks page](https://juliahealth.org/KomaMRI.jl/benchmarks/). The first data point is from when `Bloch` was what is now `BlochSimple`, before a more optimized implementation was added. The following three pull requests are primarily responsible for the performance improvements made in between then and now: +The performance differences between `Bloch` and `BlochSimple` can be seen on the KomaMRI [benchmarks page](https://juliahealth.org/KomaMRI.jl/benchmarks/).
The first data point is from when `Bloch` was what is now `BlochSimple`, before a more optimized implementation was created. The following three pull requests are primarily responsible for the performance differences between `Bloch` and `BlochSimple`: * [(443) Optimize run_spin_precession! and run_spin_excitation! for CPU](https://github.com/JuliaHealth/KomaMRI.jl/pull/443) * [(459) Optimize run_spin_precession! for GPU](https://github.com/JuliaHealth/KomaMRI.jl/pull/459) From 74b81be88854b7c597c6c962babbb3b4a3a0d3dd Mon Sep 17 00:00:00 2001 From: rkierulf Date: Fri, 23 Aug 2024 18:51:57 -0500 Subject: [PATCH 06/11] Update 1-getting-started.md --- docs/src/how-to/1-getting-started.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/src/how-to/1-getting-started.md b/docs/src/how-to/1-getting-started.md index 1ff70956e..e8cf131a4 100644 --- a/docs/src/how-to/1-getting-started.md +++ b/docs/src/how-to/1-getting-started.md @@ -24,10 +24,10 @@ Then press `Ctrl+C` or `backspace` to return to the `julia>` prompt. --- ## My First MRI Simulation -For our first simulation we will use **KomaMRI**'s graphical user interface (GUI). For this, you will first need to load **KomaMRI** by typing `using KomaMRI`, and then launch the GUI with the [`KomaUI`](@ref) function. +For our first simulation we will use **KomaMRI**'s graphical user interface (GUI). For this, you will first need to load **KomaMRI** by typing `using KomaMRI`, and then launch the GUI with the [`KomaUI`](@ref) function. Note that if you want to run simulations on the GPU (for example, using CUDA), then `using CUDA` is also necessary (see (GPU Parallelization)[https://juliahealth.org/KomaMRI.jl/v0.8/explanation/4-gpu-explanation/]). ```julia-repl -julia> using KomaMRI +julia> using KomaMRI, CUDA julia> KomaUI() ``` @@ -45,4 +45,4 @@ Then, press the `Reconstruct!` button and wait until the reconstruction ends. 
No ![](../assets/ui-view-abs-image.png) -Congratulations, you successfully simulated an MRI acquisition! 🎊 \ No newline at end of file +Congratulations, you successfully simulated an MRI acquisition! 🎊 From 0480faa7f9bd5d9ddecfba80488a3dd3c0242a54 Mon Sep 17 00:00:00 2001 From: Carlos Castillo Passi Date: Mon, 26 Aug 2024 16:54:12 -0400 Subject: [PATCH 07/11] Update docs/src/how-to/1-getting-started.md --- docs/src/how-to/1-getting-started.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/how-to/1-getting-started.md b/docs/src/how-to/1-getting-started.md index e8cf131a4..58bb59fa9 100644 --- a/docs/src/how-to/1-getting-started.md +++ b/docs/src/how-to/1-getting-started.md @@ -24,7 +24,7 @@ Then press `Ctrl+C` or `backspace` to return to the `julia>` prompt. --- ## My First MRI Simulation -For our first simulation we will use **KomaMRI**'s graphical user interface (GUI). For this, you will first need to load **KomaMRI** by typing `using KomaMRI`, and then launch the GUI with the [`KomaUI`](@ref) function. Note that if you want to run simulations on the GPU (for example, using CUDA), then `using CUDA` is also necessary (see (GPU Parallelization)[https://juliahealth.org/KomaMRI.jl/v0.8/explanation/4-gpu-explanation/]). +For our first simulation we will use **KomaMRI**'s graphical user interface (GUI). For this, you will first need to load **KomaMRI** by typing `using KomaMRI`, and then launch the GUI with the [`KomaUI`](@ref) function. Note that if you want to run simulations on the GPU (for example, using CUDA), then `using CUDA` is also necessary (see [GPU Parallelization](../explanation/4-gpu-explanation/)). 
```julia-repl julia> using KomaMRI, CUDA From 9d6e15c1c06acda6557ae4203abb12b4b20c7833 Mon Sep 17 00:00:00 2001 From: Carlos Castillo Passi Date: Mon, 26 Aug 2024 17:13:04 -0400 Subject: [PATCH 08/11] Update 1-getting-started.md --- docs/src/how-to/1-getting-started.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/how-to/1-getting-started.md b/docs/src/how-to/1-getting-started.md index 58bb59fa9..734f0339a 100644 --- a/docs/src/how-to/1-getting-started.md +++ b/docs/src/how-to/1-getting-started.md @@ -24,7 +24,7 @@ Then press `Ctrl+C` or `backspace` to return to the `julia>` prompt. --- ## My First MRI Simulation -For our first simulation we will use **KomaMRI**'s graphical user interface (GUI). For this, you will first need to load **KomaMRI** by typing `using KomaMRI`, and then launch the GUI with the [`KomaUI`](@ref) function. Note that if you want to run simulations on the GPU (for example, using CUDA), then `using CUDA` is also necessary (see [GPU Parallelization](../explanation/4-gpu-explanation/)). +For our first simulation we will use **KomaMRI**'s graphical user interface (GUI). For this, you will first need to load **KomaMRI** by typing `using KomaMRI`, and then launch the GUI with the [`KomaUI`](@ref) function. Note that if you want to run simulations on the GPU (for example, using CUDA), then `using CUDA` is also necessary (see [GPU Parallelization](../explanation/4-gpu-explanation.md)). 
```julia-repl julia> using KomaMRI, CUDA From 852893f33c405ff687dbd3430508a89b78c9d0ec Mon Sep 17 00:00:00 2001 From: Carlos Castillo Passi Date: Tue, 27 Aug 2024 01:35:30 -0400 Subject: [PATCH 09/11] Update README.md to explain backend selection --- README.md | 54 +++++++----------------------------------------------- 1 file changed, 7 insertions(+), 47 deletions(-) diff --git a/README.md b/README.md index 8c25b395d..53ae82d07 100644 --- a/README.md +++ b/README.md @@ -49,68 +49,28 @@ KomaMRI.jl is a Julia package for highly efficient ⚡ MRI simulations. KomaMRI -## Table of Contents -- [News](#news) -- [Installation](#installation) -- [First run](#first-run) -- [How to Contribute](#how-to-contribute) -- [How to Cite](#how-to-cite) -- [Tested compatibility](#tested-compatibility) - -## News - -- **(7 Dec 2023)** Koma was present in [MRI Together](https://mritogether.esmrmb.org/) 😼. The talk is available [here](https://www.youtube.com/watch?v=9mRQH8um4-A). Also, I uploaded the promised [educational example](https://juliahealth.org/KomaMRI.jl/stable/tutorial-pluto/01-gradient-echo-spin-echo/). -- **(17 Nov 2023)** Pretty excited of being part of [ISMRM Pulseq's virtual meeting](https://github.com/pulseq/ISMRM-Virtual-Meeting--November-15-17-2023). The slides available [here](https://github.com/pulseq/ISMRM-Virtual-Meeting--November-15-17-2023/blob/35a8da7eaa0bf42f2127e1338a440ccd4e3ef53c/slides/day3_KomaMRI_simulator_Quantitative_MRI.pdf). -- **(27 Jul 2023)** I gave a talk at MIT 😄 for [JuliaCon 2023](https://juliacon.org/2023/)! A video of the presentation can be seen [here](https://www.youtube.com/watch?v=WVT9wJegC6Q). -- **(29 Jun 2023)** [KomaMRI.jl's paper](https://onlinelibrary.wiley.com/doi/10.1002/mrm.29635) was chosen as a July editor's pick in MRM 🥳! -- **(6 Mar 2023)** Paper published in MRM 😃! 
-- **(8 Dec 2022)** [KomaMRI v0.7](https://github.com/JuliaHealth/KomaMRI.jl/releases/tag/v0.7.0): improved performance (**5x faster**), type stability, extensibility, and more! -- **(17 May 2022)** [ISMRM 2022 digital poster](https://archive.ismrm.org/2022/2815.html) presented in London, UK. Recording [here!](https://www.youtube.com/watch?v=tH_XUnoSJK8). Name change [MRIsim.jl -> KomaMRI.jl](https://github.com/JuliaHealth/KomaMRI.jl/releases/tag/v0.6.0). -- **(Aug 2020)** [Prehistoric version](https://github.com/JuliaHealth/KomaMRI.jl/releases/tag/v0.2.1-alpha) of Koma, MRIsim, presented as an [ISMRM 2020 digital poster](https://cds.ismrm.org/protected/20MProceedings/PDFfiles/4437.html) (virtual conference). - -
- ☰ Roadmap - - v1.0: - - [x] Phantom and Sequence data types, - - [x] Spin precession in gradient-only blocks (simulation optimization), - - [x] GPU acceleration using CUDA.jl, - - [x] RF excitation, - - [x] GPU accelaration of RF excitation, - - [x] Scanner data-type: , etc., - - [x] [Pulseq](https://github.com/imr-framework/pypulseq) IO, - - [x] Signal "Raw Output" dictionary ([ISMRMRD](https://ismrmrd.github.io/)), - - [x] [MRIReco.jl](https://magneticresonanceimaging.github.io/MRIReco.jl/latest/) for the reconstruciton, - - [ ] Documentation, - - [ ] [Auxiliary Pulseq functions](https://github.com/imr-framework/pypulseq/tree/master/pypulseq), - - [ ] Coil sensitivities, - - [ ] Cardiac phantoms and triggers. - - [ ] decay, - - Next: - - [ ] Diffusion models with Laplacian Eigen Functions, - - [ ] Magnetic susceptibility, - - [ ] Use [PackageCompiler.jl](https://julialang.github.io/PackageCompiler.jl/dev/apps.html) to build a ditributable core or app. - -
- - ## Installation To install, just **type** `] add KomaMRI` in the Julia REPL or copy-paste the following into the Julia REPL: ```julia pkg> add KomaMRI +pkg> add CUDA # Optional: Install desired GPU backend (CUDA, AMDGPU, Metal, or oneAPI) + ``` -For more information about installation instructions, refer to the section [Getting Started](https://JuliaHealth.github.io/KomaMRI.jl/stable/getting-started/) of the documentation. +For more information about installation instructions, refer to the section [Getting Started](https://JuliaHealth.github.io/KomaMRI.jl/dev/how-to/1-getting-started) of the documentation. ## First run KomaMRI.jl features a convenient GUI with predefined simulation inputs (i.e. `Sequence`, `Phantom`, and `Scanner`). To launch the GUI, use the following command: ```julia using KomaMRI +using CUDA # Optional: Load GPU backend (default: CPU) KomaUI() ``` Press the button that says "Simulate!" to do your first simulation :). Then, a notification will emerge telling you that the simulation was successful. In this notification, you can either select to (1) see the Raw Data or (2) to proceed with the reconstruction. +> [!IMPORTANT] +> Starting from **KomaMRI v0.9** we are using [package extensions](https://pkgdocs.julialang.org/v1/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)) to deal with GPU dependencies, meaning that to run simulations on the GPU, installing (`add CUDA/AMDGPU/Metal/oneAPI`) and loading (`using CUDA/AMDGPU/Metal/oneAPI`) the desired backend will be necessary (see [GPU Parallelization](https://JuliaHealth.github.io/KomaMRI.jl/dev/explanation/4-gpu-explanation) and [Tested compatibility](README.md##tested-compatibility)). 
+
 ## How to Contribute
 KomaMRI exists thanks to all our contributors:

From 77f8fa70ee2ab6152505ac0411b43088b67763bb Mon Sep 17 00:00:00 2001
From: Carlos Castillo Passi
Date: Tue, 27 Aug 2024 01:38:19 -0400
Subject: [PATCH 10/11] Fixed link in README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 53ae82d07..ab0dbb87d 100644
--- a/README.md
+++ b/README.md
@@ -69,7 +69,7 @@ KomaUI()
 Press the button that says "Simulate!" to do your first simulation :). Then, a notification will emerge telling you that the simulation was successful. In this notification, you can either select to (1) see the Raw Data or (2) to proceed with the reconstruction.

 > [!IMPORTANT]
-> Starting from **KomaMRI v0.9** we are using [package extensions](https://pkgdocs.julialang.org/v1/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)) to deal with GPU dependencies, meaning that to run simulations on the GPU, installing (`add CUDA/AMDGPU/Metal/oneAPI`) and loading (`using CUDA/AMDGPU/Metal/oneAPI`) the desired backend will be necessary (see [GPU Parallelization](https://JuliaHealth.github.io/KomaMRI.jl/dev/explanation/4-gpu-explanation) and [Tested compatibility](README.md##tested-compatibility)).
+> Starting from **KomaMRI v0.9** we are using [package extensions](https://pkgdocs.julialang.org/v1/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)) to deal with GPU dependencies, meaning that to run simulations on the GPU, installing (`add CUDA/AMDGPU/Metal/oneAPI`) and loading (`using CUDA/AMDGPU/Metal/oneAPI`) the desired backend will be necessary (see [GPU Parallelization](https://JuliaHealth.github.io/KomaMRI.jl/dev/explanation/4-gpu-explanation) and [Tested compatibility](#tested-compatibility)).
 ## How to Contribute
 KomaMRI exists thanks to all our contributors:

From d3268f2f07f0715dc91884e6e0a1826b126fd4de Mon Sep 17 00:00:00 2001
From: Carlos Castillo Passi
Date: Tue, 27 Aug 2024 13:44:14 -0400
Subject: [PATCH 11/11] Reverted deleted README.md info

---
 README.md | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/README.md b/README.md
index ab0dbb87d..8fb30721b 100644
--- a/README.md
+++ b/README.md
@@ -49,6 +49,52 @@ KomaMRI.jl is a Julia package for highly efficient ⚡ MRI simulations. KomaMRI
+## Table of Contents
+- [News](#news)
+- [Installation](#installation)
+- [First run](#first-run)
+- [How to Contribute](#how-to-contribute)
+- [How to Cite](#how-to-cite)
+- [Tested compatibility](#tested-compatibility)
+
+## News
+
+- **(7 Dec 2023)** Koma was present in [MRI Together](https://mritogether.esmrmb.org/) 😼. The talk is available [here](https://www.youtube.com/watch?v=9mRQH8um4-A). Also, I uploaded the promised [educational example](https://juliahealth.org/KomaMRI.jl/stable/tutorial-pluto/01-gradient-echo-spin-echo/).
+- **(17 Nov 2023)** Pretty excited to be part of [ISMRM Pulseq's virtual meeting](https://github.com/pulseq/ISMRM-Virtual-Meeting--November-15-17-2023). The slides are available [here](https://github.com/pulseq/ISMRM-Virtual-Meeting--November-15-17-2023/blob/35a8da7eaa0bf42f2127e1338a440ccd4e3ef53c/slides/day3_KomaMRI_simulator_Quantitative_MRI.pdf).
+- **(27 Jul 2023)** I gave a talk at MIT 😄 for [JuliaCon 2023](https://juliacon.org/2023/)! A video of the presentation can be seen [here](https://www.youtube.com/watch?v=WVT9wJegC6Q).
+- **(29 Jun 2023)** [KomaMRI.jl's paper](https://onlinelibrary.wiley.com/doi/10.1002/mrm.29635) was chosen as a July editor's pick in MRM 🥳!
+- **(6 Mar 2023)** Paper published in MRM 😃!
+- **(8 Dec 2022)** [KomaMRI v0.7](https://github.com/JuliaHealth/KomaMRI.jl/releases/tag/v0.7.0): improved performance (**5x faster**), type stability, extensibility, and more!
+- **(17 May 2022)** [ISMRM 2022 digital poster](https://archive.ismrm.org/2022/2815.html) presented in London, UK. Recording [here!](https://www.youtube.com/watch?v=tH_XUnoSJK8). Name change [MRIsim.jl -> KomaMRI.jl](https://github.com/JuliaHealth/KomaMRI.jl/releases/tag/v0.6.0).
+- **(Aug 2020)** [Prehistoric version](https://github.com/JuliaHealth/KomaMRI.jl/releases/tag/v0.2.1-alpha) of Koma, MRIsim, presented as an [ISMRM 2020 digital poster](https://cds.ismrm.org/protected/20MProceedings/PDFfiles/4437.html) (virtual conference).
+
+ ☰ Roadmap
+
+ v1.0:
+ - [x] Phantom and Sequence data types,
+ - [x] Spin precession in gradient-only blocks (simulation optimization),
+ - [x] GPU acceleration using CUDA.jl,
+ - [x] RF excitation,
+ - [x] GPU acceleration of RF excitation,
+ - [x] Scanner data-type: , etc.,
+ - [x] [Pulseq](https://github.com/imr-framework/pypulseq) IO,
+ - [x] Signal "Raw Output" dictionary ([ISMRMRD](https://ismrmrd.github.io/)),
+ - [x] [MRIReco.jl](https://magneticresonanceimaging.github.io/MRIReco.jl/latest/) for the reconstruction,
+ - [ ] Documentation,
+ - [ ] [Auxiliary Pulseq functions](https://github.com/imr-framework/pypulseq/tree/master/pypulseq),
+ - [ ] Coil sensitivities,
+ - [ ] Cardiac phantoms and triggers.
+ - [ ] decay,
+
+ Next:
+ - [ ] Diffusion models with Laplacian Eigen Functions,
+ - [ ] Magnetic susceptibility,
+ - [ ] Use [PackageCompiler.jl](https://julialang.github.io/PackageCompiler.jl/dev/apps.html) to build a distributable core or app.
+
+
+
 ## Installation
 To install, just **type** `] add KomaMRI` in the Julia REPL or copy-paste the following into the Julia REPL: