Merge branch 'master' into finufft
chaithyagr authored Feb 10, 2025
2 parents 5dc9da9 + c56f1ba commit 7187491
Showing 42 changed files with 3,174 additions and 999 deletions.
29 changes: 18 additions & 11 deletions .github/workflows/test-ci.yml
@@ -98,9 +98,9 @@ jobs:
         shell: bash
         run: |
           source $RUNNER_WORKSPACE/venv/bin/activate
-          export CUDA_BIN_PATH=/usr/local/cuda-12.1/
-          export PATH=/usr/local/cuda-12.1/bin/:${PATH}
-          export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64/:${LD_LIBRARY_PATH}
+          export CUDA_BIN_PATH=/usr/local/cuda-12.4/
+          export PATH=/usr/local/cuda-12.4/bin/:${PATH}
+          export LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64/:${LD_LIBRARY_PATH}
           pip install -e .[${{ matrix.backend }},autodiff]
       - name: Run Tests
@@ -162,11 +162,18 @@ jobs:
           path: ~/.cache/brainweb
           key: ${{ runner.os }}-Brainweb

-      - name: Point to CUDA 12.1 #TODO: This can be combined from other jobs
+      - name: Install Python deps
+        shell: bash
+        run: |
+          python -m pip install --upgrade pip
+          python -m pip install -e .[extra,test,dev]
+          python -m pip install finufft pooch brainweb-dl torch fastmri
+      - name: Install GPU related interfaces
         run: |
-          export CUDA_BIN_PATH=/usr/local/cuda-12.1/
-          export PATH=/usr/local/cuda-12.1/bin/:${PATH}
-          export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64/:${LD_LIBRARY_PATH}
+          export CUDA_BIN_PATH=/usr/local/cuda-12.4/
+          export PATH=/usr/local/cuda-12.4/bin/:${PATH}
+          export LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64/:${LD_LIBRARY_PATH}
       - name: Install Python deps
         shell: bash
@@ -270,11 +277,11 @@ jobs:
         with:
           python-version: "3.10"

-      - name: Point to CUDA 12.1
+      - name: Point to CUDA 12.4
         run: |
-          export CUDA_BIN_PATH=/usr/local/cuda-12.1/
-          export PATH=/usr/local/cuda-12.1/bin/:${PATH}
-          export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64/:${LD_LIBRARY_PATH}
+          export CUDA_BIN_PATH=/usr/local/cuda-12.4/
+          export PATH=/usr/local/cuda-12.4/bin/:${PATH}
+          export LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64/:${LD_LIBRARY_PATH}
       - name: Install dependencies
         shell: bash -l {0}
26 changes: 13 additions & 13 deletions docs/nufft.rst
@@ -174,14 +174,14 @@ To achieve a clinically feasible scan time, each frame or contrast is acquired w
 .. math::
     \tilde{\boldsymbol{y}} = \begin{bmatrix}
-    \mathcal{F}_\Omega_1 & 0 & \cdots & 0 \\
-    0 & \mathcal{F}_\Omega_2 & \cdots & 0 \\
+    \mathcal{F}_{\Omega_1} & 0 & \cdots & 0 \\
+    0 & \mathcal{F}_{\Omega_2} & \cdots & 0 \\
     \vdots & \vdots & \ddots & \vdots \\
-    0 & 0 & \cdots & \mathcal{F}_\Omega_T
+    0 & 0 & \cdots & \mathcal{F}_{\Omega_T}
     \end{bmatrix}
     \boldsymbol{x} + \boldsymbol{n}
-where :math:`\mathcal{F}_\Omega_1, \dots, \mathcal{F}_\Omega_T` are the Fourier operators corresponding to each individual frame. Some applications (e.g., MR Fingerprinting [3]_) may consists of
+where :math:`\mathcal{F}_{\Omega_1}, \dots, \mathcal{F}_{\Omega_T}` are the Fourier operators corresponding to each individual frame. Some applications (e.g., MR Fingerprinting [3]_) may consists of
 thousands of total frames :math:`T`, leading to repeated Fourier Transform operations and high computational burden. However, the 1D signal series arising from similar voxels, e.g., with similar
 relaxation properties, are typically highly correlated. For this reason, the image series can be represented as:

@@ -195,17 +195,17 @@ training dataset or an ensemble of simulated Bloch responses. The signal model c
 .. math::
     \tilde{\boldsymbol{y}} = \begin{bmatrix}
-    \mathcal{F}_\Omega_1 & 0 & \cdots & 0 \\
-    0 & \mathcal{F}_\Omega_2 & \cdots & 0 \\
+    \mathcal{F}_{\Omega_1} & 0 & \cdots & 0 \\
+    0 & \mathcal{F}_{\Omega_2} & \cdots & 0 \\
     \vdots & \vdots & \ddots & \vdots \\
-    0 & 0 & \cdots & \mathcal{F}_\Omega_T
+    0 & 0 & \cdots & \mathcal{F}_{\Omega_T}
     \end{bmatrix}
     \Phi \Phi^H \boldsymbol{x} + \boldsymbol{n} =
     \begin{bmatrix}
-    \mathcal{F}_\Omega_1 & 0 & \cdots & 0 \\
-    0 & \mathcal{F}_\Omega_2 & \cdots & 0 \\
+    \mathcal{F}_{\Omega_1} & 0 & \cdots & 0 \\
+    0 & \mathcal{F}_{\Omega_2} & \cdots & 0 \\
     \vdots & \vdots & \ddots & \vdots \\
-    0 & 0 & \cdots & \mathcal{F}_\Omega_T
+    0 & 0 & \cdots & \mathcal{F}_{\Omega_T}
     \end{bmatrix}
     \Phi \boldsymbol{\alpha} + \boldsymbol{n}
@@ -214,15 +214,15 @@ the projection operator :math:`\boldsymbol{\Phi}` commutes with the Fourier tran

 .. math::
-    \tilde{\boldsymbol{y}} = \Phi \mathcal{F}_Omega(\boldsymbol{\alpha}) + \boldsymbol{n}
+    \tilde{\boldsymbol{y}} = \Phi \mathcal{F}_\Omega(\boldsymbol{\alpha}) + \boldsymbol{n}
 that is, computation now involves :math:`K \ll T` Fourier Transform operations, each with the same sampling trajectory, which can be computed by levaraging efficient NUFFT implementations for conventional static MRI.
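To make the compression step concrete, here is a minimal, self-contained sketch (not taken from the repository; the array shapes, the random dictionary, and the use of NumPy's SVD are illustrative assumptions, in the spirit of [4]_): the temporal basis :math:`\Phi` is estimated from a dictionary of simulated signal evolutions, and the :math:`T`-frame image series is represented by :math:`K \ll T` coefficient images, so only :math:`K` Fourier transforms are needed in the forward model.

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    T, K = 600, 5                 # number of frames, subspace rank
    n_atoms, n_vox = 2000, 4096   # dictionary size, voxels in the image series

    # Dictionary of simulated signal evolutions, shape (T, n_atoms).
    dictionary = np.cumsum(rng.standard_normal((T, n_atoms)), axis=0)

    # Temporal basis Phi: first K left singular vectors of the dictionary, shape (T, K).
    U, _, _ = np.linalg.svd(dictionary, full_matrices=False)
    Phi = U[:, :K]

    # A synthetic image series that lies exactly in the subspace, shape (T, n_vox).
    alpha = rng.standard_normal((K, n_vox))     # K coefficient images
    x_series = Phi @ alpha                      # T frames

    # Compression: K coefficient images instead of T frames.
    alpha_hat = Phi.conj().T @ x_series         # shape (K, n_vox)
    x_recovered = Phi @ alpha_hat               # back-projection, shape (T, n_vox)
    print(np.allclose(x_recovered, x_series))   # True: the series is in the subspace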

 The Non Uniform Fast Fourier Transform
 ======================================


-In order to lower the computational cost of the Non-Uniform Fourier Transform, the main idea is to move back to a regular grid where an FFT would be performed (going from a typical :math:`O(MN)` complexity to `O(M\log(N))`). Thus, the main steps of the *Non-Uniform Fast Fourier Transform* are for the type 1:
+In order to lower the computational cost of the Non-Uniform Fourier Transform, the main idea is to move back to a regular grid where an FFT would be performed (going from a typical :math:`O(MN)` complexity to :math:`O(M\log(N))`). Thus, the main steps of the *Non-Uniform Fast Fourier Transform* are for the type 1:

 1. Spreading/Interpolation of the non-uniform points to an oversampled Cartesian grid (typically with twice the resolution of the final image)
 2. Perform the (I)FFT on this image
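The steps above can be illustrated with a deliberately naive 1D sketch (not the library's implementation: it uses nearest-neighbour spreading instead of a smooth window such as Kaiser-Bessel, so no deapodization is applied and the accuracy is far below what FINUFFT/gpuNUFFT-style kernels achieve):

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    N, M, osf = 64, 2000, 4          # Fourier modes, non-uniform samples, oversampling factor
    n = osf * N                      # oversampled grid size

    x = rng.uniform(-0.5, 0.5, M)    # non-uniform sample locations (in cycles)
    c = rng.standard_normal(M) + 1j * rng.standard_normal(M)

    # Reference: direct type-1 non-uniform DFT, O(M * N).
    k = np.arange(-N // 2, N // 2)
    f_direct = np.exp(-2j * np.pi * np.outer(k, x)) @ c

    # Step 1 -- spreading: crude nearest-neighbour "kernel" (a smooth window would
    # additionally require deapodization, i.e. dividing by its Fourier transform).
    grid = np.zeros(n, dtype=complex)
    np.add.at(grid, np.round(x * n).astype(int) % n, c)

    # Step 2 -- FFT on the oversampled Cartesian grid, O(n log n).
    spectrum = np.fft.fftshift(np.fft.fft(grid))

    # Step 3 -- keep only the central N modes.
    f_fast = spectrum[n // 2 - N // 2 : n // 2 + N // 2]

    # Relative error is small but clearly visible with this crude spreading kernel.
    print(np.linalg.norm(f_fast - f_direct) / np.linalg.norm(f_direct))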
@@ -259,4 +259,4 @@ References
 .. [1] https://en.m.wikipedia.org/wiki/Non-uniform_discrete_Fourier_transform
 .. [2] Noll, D. C., Meyer, C. H., Pauly, J. M., Nishimura, D. G., Macovski, A., "A homogeneity correction method for magnetic resonance imaging with time-varying gradients", IEEE Transaction on Medical Imaging (1991), pp. 629-637.
 .. [3] Fessler, J. A., Lee, S., Olafsson, V. T., Shi, H. R., Noll, D. C., "Toeplitz-based iterative image reconstruction for MRI with correction for magnetic field inhomogeneity", IEEE Transactions on Signal Processing 53.9 (2005), pp. 3393–3402.
-.. [4] D. F. McGivney et al., "SVD Compression for Magnetic Resonance Fingerprinting in the Time Domain," IEEE Transactions on Medical Imaging (2014), pp. 2311-2322.
+.. [4] D. F. McGivney et al., "SVD Compression for Magnetic Resonance Fingerprinting in the Time Domain," IEEE Transactions on Medical Imaging (2014), pp. 2311-2322.
10 changes: 5 additions & 5 deletions docs/paper-joss/paper.md
@@ -32,15 +32,15 @@ authors:
    affiliation: "1,2"

 affiliations:
-  - name: MIND, Inria
+  - name: MIND, Inria, France
     index: 1
-  - name: Université Paris-Saclay / CEA
+  - name: Université Paris-Saclay / CEA, France
     index: 2
-  - name: Chipiron
+  - name: Chipiron, France
     index: 3
-  - name: INFN, Pisa Division
+  - name: INFN, Pisa Division, Italy
     index: 4
-  - name: Siemens Healthineers
+  - name: Siemens Healthineers, Princeton, United States of America
     index: 5

 date: 20 September 2024
28 changes: 14 additions & 14 deletions examples/GPU/example_fastMRI_UNet.py
@@ -4,30 +4,30 @@
 Simple UNet model.
 ==================
-This model is a simplified version of the U-Net architecture,
-which is widely used for image segmentation tasks.
-This is implemented in the proprietary FASTMRI package [fastmri]_.
-The U-Net model consists of an encoder (downsampling path) and
-a decoder (upsampling path) with skip connections between corresponding
-layers in the encoder and decoder.
-These skip connections help in retaining spatial information
+This model is a simplified version of the U-Net architecture,
+which is widely used for image segmentation tasks.
+This is implemented in the proprietary FASTMRI package [fastmri]_.
+The U-Net model consists of an encoder (downsampling path) and
+a decoder (upsampling path) with skip connections between corresponding
+layers in the encoder and decoder.
+These skip connections help in retaining spatial information
 that is lost during the downsampling process.
-The primary purpose of this model is to perform image reconstruction tasks,
-specifically for MRI images.
-It takes an input MRI image and reconstructs it to improve the image quality
+The primary purpose of this model is to perform image reconstruction tasks,
+specifically for MRI images.
+It takes an input MRI image and reconstructs it to improve the image quality
 or to recover missing parts of the image.
-This implementation of the UNet model was pulled from the FastMRI Facebook
-repository, which is a collaborative research project aimed at advancing
+This implementation of the UNet model was pulled from the FastMRI Facebook
+repository, which is a collaborative research project aimed at advancing
 the field of medical imaging using machine learning techniques.
 .. math::
     \mathbf{\hat{x}} = \mathrm{arg} \min_{\mathbf{x}} || \mathcal{U}_\mathbf{\theta}(\mathbf{y}) - \mathbf{x} ||_2^2
-where :math:`\mathbf{\hat{x}}` is the reconstructed MRI image, :math:`\mathbf{x}` is the ground truth image,
+where :math:`\mathbf{\hat{x}}` is the reconstructed MRI image, :math:`\mathbf{x}` is the ground truth image,
 :math:`\mathbf{y}` is the input MRI image (e.g., k-space data), and :math:`\mathcal{U}_\mathbf{\theta}` is the U-Net model parameterized by :math:`\theta`.
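A minimal sketch of this objective (not the example script itself: the ``Unet`` constructor arguments are assumed from the fastmri package, the tensors are random stand-ins, and in the real example ``y`` comes from an adjoint NUFFT of under-sampled k-space):

.. code-block:: python

    import torch
    from fastmri.models import Unet  # import path/signature assumed from the fastmri package

    # Random stand-ins shaped (batch, channels, height, width): x is the target image,
    # y a degraded input image.
    x = torch.rand(1, 1, 256, 256)
    y = x + 0.1 * torch.randn_like(x)

    model = Unet(in_chans=1, out_chans=1, chans=32, num_pool_layers=4)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One gradient step on || U_theta(y) - x ||_2^2.
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(y), x)
    loss.backward()
    optimizer.step()
    print(float(loss))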
.. warning::
4 changes: 2 additions & 2 deletions examples/GPU/example_learn_samples.py
Expand Up @@ -5,15 +5,15 @@
======================
A small pytorch example to showcase learning k-space sampling patterns.
This example showcases the auto-diff capabilities of the NUFFT operator
This example showcases the auto-diff capabilities of the NUFFT operator
wrt to k-space trajectory in mri-nufft.
In this example, we solve the following optimization problem:
.. math::
\mathbf{\hat{K}} = \mathrm{arg} \min_{\mathbf{K}} || \mathcal{F}_\mathbf{K}^* D_\mathbf{K} \mathcal{F}_\mathbf{K} \mathbf{x} - \mathbf{x} ||_2^2
where :math:`\mathcal{F}_\mathbf{K}` is the forward NUFFT operator and :math:`D_\mathbf{K}` is the density compensators for trajectory :math:`\mathbf{K}`, :math:`\mathbf{x}` is the MR image which is also the target image to be reconstructed.
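The same idea in a dependency-free sketch (illustrative only: it builds an explicit, slow non-uniform DFT matrix in plain PyTorch instead of mri-nufft's autodiff NUFFT backends, and it drops the density compensation :math:`D_\mathbf{K}` for brevity):

.. code-block:: python

    import math
    import torch

    torch.manual_seed(0)
    N, M = 32, 512                              # image size, number of k-space samples

    # Simple square phantom as both input and target image x.
    x = torch.zeros(N, N)
    x[8:24, 8:24] = 1.0
    x = x.flatten() + 0j

    # Image-domain grid coordinates and a learnable trajectory K in [-0.5, 0.5).
    coords = torch.stack(torch.meshgrid(torch.arange(N) - N // 2,
                                        torch.arange(N) - N // 2,
                                        indexing="ij"), dim=-1).reshape(-1, 2).float()
    K = (torch.rand(M, 2) - 0.5).requires_grad_(True)

    def forward_op(K):
        # Explicit non-uniform DFT matrix F_K of shape (M, N*N); differentiable w.r.t. K.
        return torch.exp(-2j * math.pi * (K @ coords.T)) / N

    optimizer = torch.optim.Adam([K], lr=1e-3)
    for _ in range(50):
        optimizer.zero_grad()
        F = forward_op(K)
        recon = F.conj().T @ (F @ x)                 # adjoint "reconstruction" F_K^* F_K x
        loss = torch.sum(torch.abs(recon - x) ** 2)  # || F_K^* F_K x - x ||_2^2
        loss.backward()                              # gradients flow back to the trajectory K
        optimizer.step()
    print(float(loss))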
.. warning::
10 changes: 5 additions & 5 deletions examples/GPU/example_learn_samples_multicoil.py
@@ -5,24 +5,24 @@
 =========================================
 A small pytorch example to showcase learning k-space sampling patterns.
-This example showcases the auto-diff capabilities of the NUFFT operator
+This example showcases the auto-diff capabilities of the NUFFT operator
 wrt to k-space trajectory in mri-nufft.
 Briefly, in this example we try to learn the k-space samples :math:`\mathbf{K}` for the following cost function:
 .. math::
-    \mathbf{\hat{K}} = arg \min_{\mathbf{K}} || \sum_{\ell=1}^LS_\ell^* \mathcal{F}_\mathbf{K}^* D_\mathbf{K} \mathcal{F}_\mathbf{K} x_\ell - \mathbf{x}_{sos} ||_2^2
+    \mathbf{\hat{K}} = arg \min_{\mathbf{K}} || \sum_{\ell=1}^LS_\ell^* \mathcal{F}_\mathbf{K}^* D_\mathbf{K} \mathcal{F}_\mathbf{K} x_\ell - \mathbf{x}_{sos} ||_2^2
 where :math:`S_\ell` is the sensitivity map for the :math:`\ell`-th coil, :math:`\mathcal{F}_\mathbf{K}` is the forward NUFFT operator and :math:`D_\mathbf{K}` is the density compensators for trajectory :math:`\mathbf{K}`, :math:`\mathbf{x}_\ell` is the image for the :math:`\ell`-th coil, and :math:`\mathbf{x}_{sos} = \sqrt{\sum_{\ell=1}^L x_\ell^2}` is the sum-of-squares image as target image to be reconstructed.
 In this example, the forward NUFFT operator :math:`\mathcal{F}_\mathbf{K}` is implemented with `model.operator` while the SENSE operator :math:`model.sense_op` models the term :math:`\mathbf{A} = \sum_{\ell=1}^LS_\ell^* \mathcal{F}_\mathbf{K}^* D_\mathbf{K}`.
 For our data, we use a 2D slice of a 3D MRI image from the BrainWeb dataset, and the sensitivity maps are simulated using the `birdcage_maps` function from `sigpy.mri`.
 .. note::
     To showcase the features of ``mri-nufft``, we use ``
-    "cufinufft"`` backend for ``model.operator`` without density compensation and ``"gpunufft"`` backend for ``model.sense_op`` with density compensation.
+    "cufinufft"`` backend for ``model.operator`` without density compensation and ``"gpunufft"`` backend for ``model.sense_op`` with density compensation.
 .. warning::
     This example only showcases the autodiff capabilities, the learned sampling pattern is not scanner compliant as the scanner gradients required to implement it violate the hardware constraints. In practice, a projection :math:`\Pi_\mathcal{Q}(\mathbf{K})` into the scanner constraints set :math:`\mathcal{Q}` is recommended (see [Proj]_). This is implemented in the proprietary SPARKLING package [Sparks]_. Users are encouraged to contact the authors if they want to use it.
 """

1 comment on commit 7187491

@github-actions

Artifacts
