
Commit

Merge pull request #22 from guanhuaw/feature/precommit
Feature/precommit
guanhuaw authored Feb 29, 2024
2 parents 036c9bc + 10a6c59 commit 787ce33
Showing 79 changed files with 1,235 additions and 894 deletions.
37 changes: 37 additions & 0 deletions .github/workflows/python-ci.yml
@@ -0,0 +1,37 @@
name: Python-CI

on:
  push:
    branches:
      - main
      - master
      - develop
      - feature/*
  pull_request:
    branches:
      - main
      - master
      - develop
      - feature/*

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.x' # Specify the Python version you need

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -e .
          pip install ruff pytest
      - name: Lint with Ruff
        run: |
          ruff ./mirtorch
1 change: 0 additions & 1 deletion .gitignore
@@ -41,4 +41,3 @@ mrt/
docs/_build
docs/_static
docs/_templates

30 changes: 30 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,30 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.0.1 # Use the latest version
    hooks:
      - id: trailing-whitespace
      - id: check-yaml
      - id: end-of-file-fixer
      - id: check-added-large-files

  - repo: https://github.com/psf/black
    rev: 22.3.0 # Use the latest version
    hooks:
      - id: black
        language_version: python3
        exclude: ^(?:tests|docs|examples)/

  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: 'v0.1.5'
    hooks:
      - id: ruff
        types_or: [python, pyi, jupyter]
        args: [ --fix, --exit-non-zero-on-fix ]
        exclude: ^(?:tests|docs|examples)/

  - repo: https://github.com/codespell-project/codespell
    rev: v2.1.0
    hooks:
      - id: codespell
        exclude: ^(?:tests|docs|examples)/
2 changes: 1 addition & 1 deletion .readthedocs.yaml
@@ -17,4 +17,4 @@ sphinx:
python:
  version: 3.9
  install:
    - requirements: docs/requirements.txt
    - requirements: docs/requirements.txt
2 changes: 1 addition & 1 deletion LICENSE
@@ -48,4 +48,4 @@ Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
of this license document, but changing it is not allowed.
26 changes: 13 additions & 13 deletions README.md
@@ -5,7 +5,7 @@

A Py***Torch***-based differentiable ***I***mage ***R***econstruction ***T***oolbox, developed at the University of ***M***ichigan.

The work is inspired by [MIRT](https://github.com/JeffFessler/mirt), a widely acclaimed toolbox for medical image reconstruction.
The work is inspired by [MIRT](https://github.com/JeffFessler/mirt), a widely acclaimed toolbox for medical image reconstruction.

The main objective is to facilitate rapid, data-driven image reconstruction using CPUs and GPUs through fast prototyping and iteration. Researchers can conveniently develop new model-based and learning-based methods (e.g., unrolled neural networks) with abstraction layers. The availability of auto-differentiation enables optimization of imaging protocols and reconstruction parameters using gradient methods.

@@ -16,7 +16,7 @@ Documentation: https://mirtorch.readthedocs.io/en/latest/
### Installation

We recommend [pre-installing `PyTorch` first](https://pytorch.org/).
To install the `MIRTorch` package, after cloning the repo, please try `pip install -e .` (one may modify the package locally with this option).
To install the `MIRTorch` package, after cloning the repo, please try `pip install -e .` (one may modify the package locally with this option).

------

@@ -30,7 +30,7 @@ Instances include basic linear operations (like convolution), classical imaging

Since the Jacobian matrix of a linear operator is the operator itself, the toolbox can calculate such Jacobians directly during backpropagation, avoiding the large cache cost required by auto-differentiation.

When defining linear operators, please make sure that all torch tensors are on the same device and compatible. For example, `torch.cfloat` are compatible with `torch.float` but not `torch.double`. Similarily, `torch.chalf` is compatible with `torch.half`.
When defining linear operators, please make sure that all torch tensors are on the same device and compatible. For example, `torch.cfloat` are compatible with `torch.float` but not `torch.double`. Similarly, `torch.chalf` is compatible with `torch.half`.
When the data are images, there are two common formats: `[num_batch, num_channel, nx, ny, (nz)]` and `[nx, ny, (nz)]`.
For some LinearMaps, a boolean `batchmode` flag selects between them.
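
A small plain-PyTorch sketch of these conventions; the FFT stand-in below is not the MIRTorch `LinearMap` API, it only illustrates the dtype pairing and the batched image shape:

```python
import torch

# Toy "system matrix": an orthonormal FFT treated as a linear operator A with an adjoint.
# This is only a stand-in for illustration; it is not the MIRTorch LinearMap class.
def A(x: torch.Tensor) -> torch.Tensor:
    return torch.fft.fft2(x, norm="ortho")

def A_adjoint(y: torch.Tensor) -> torch.Tensor:
    return torch.fft.ifft2(y, norm="ortho")

# Batched image format: [num_batch, num_channel, nx, ny]
x = torch.randn(2, 1, 64, 64, dtype=torch.float)   # real-valued image
y = A(x.to(torch.cfloat))                           # torch.cfloat pairs with torch.float
xhat = A_adjoint(y)

print(y.dtype, xhat.shape)   # torch.complex64, torch.Size([2, 1, 64, 64])
```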

@@ -44,7 +44,7 @@ Currently, the package includes the conjugate gradient (CG), fast iterative thre
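
For intuition, here is a generic textbook conjugate-gradient sketch in PyTorch; it is not the package's `mirtorch.alg.CG` class, and the dense matrix in the example merely stands in for a `LinearMap` normal operator:

```python
import torch

def cg(A_mv, b, x0=None, n_iter=20, tol=1e-6):
    """Conjugate gradient for A x = b, with A symmetric positive (semi)definite.
    A_mv is a callable computing A @ x (e.g., a normal operator A'A)."""
    x = torch.zeros_like(b) if x0 is None else x0.clone()
    r = b - A_mv(x)                     # initial residual
    p = r.clone()                       # initial search direction
    rs = torch.vdot(r.flatten(), r.flatten()).real
    for _ in range(n_iter):
        Ap = A_mv(p)
        alpha = rs / torch.vdot(p.flatten(), Ap.flatten()).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = torch.vdot(r.flatten(), r.flatten()).real
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs) * p       # update search direction
        rs = rs_new
    return x

# Example: least-squares solve of M x = y via the normal equations M'M x = M'y.
M = torch.randn(32, 16)
y = torch.randn(32)
x_ls = cg(lambda v: M.T @ (M @ v), M.T @ y)
```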

#### Dictionary learning

For dictionary learning-based reconstruction, we implemented an efficient dictionary learning algorithm ([SOUP-DIL](https://arxiv.org/abs/1511.06333)) and orthogonal matching pursuit ([OMP](https://ieeexplore.ieee.org/abstract/document/342465/?casa_token=aTDkQVCM9WEAAAAA:5rXu9YikP822bCBvkhYxKWlBTJ6Fn6baTQJ9kuNrU7K-64EmGOAczYvF2dTW3al3PfPdwJAiYw)). Due to PyTorch’s limited support of sparse matrices, we use SciPy as the backend.
For dictionary learning-based reconstruction, we implemented an efficient dictionary learning algorithm ([SOUP-DIL](https://arxiv.org/abs/1511.06333)) and orthogonal matching pursuit ([OMP](https://ieeexplore.ieee.org/abstract/document/342465/?casa_token=aTDkQVCM9WEAAAAA:5rXu9YikP822bCBvkhYxKWlBTJ6Fn6baTQJ9kuNrU7K-64EmGOAczYvF2dTW3al3PfPdwJAiYw)). Due to PyTorch’s limited support of sparse matrices, we use SciPy as the backend.
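
A generic orthogonal matching pursuit sketch, written in plain PyTorch for illustration only; the package's own implementation relies on SciPy sparse routines as noted above:

```python
import torch

def omp(D, y, sparsity):
    """Greedy OMP: approximate y with at most `sparsity` atoms of D.
    D: [n, K] dictionary with unit-norm columns; y: [n] signal."""
    residual = y.clone()
    support = []
    coeffs = torch.zeros(D.shape[1])
    for _ in range(sparsity):
        # pick the atom most correlated with the current residual
        k = torch.argmax((D.T @ residual).abs()).item()
        if k not in support:
            support.append(k)
        # re-fit the coefficients on the selected atoms by least squares
        sol = torch.linalg.lstsq(D[:, support], y.unsqueeze(1)).solution.squeeze(1)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

D = torch.randn(64, 256)
D = D / D.norm(dim=0, keepdim=True)
y = D[:, [3, 99]] @ torch.tensor([1.5, -2.0])
print(omp(D, y, sparsity=2).nonzero().squeeze())   # recovers atoms 3 and 99
```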

#### Multi-GPU support

@@ -56,16 +56,16 @@ Currently, MIRTorch uses `torch.DataParallel` to support multiple GPUs. One may
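
A minimal sketch of the standard `torch.nn.DataParallel` wrapping; this is plain PyTorch, and the `Denoiser` module below is a hypothetical stand-in for a reconstruction network:

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Hypothetical CNN regularizer used inside an unrolled reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = Denoiser()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)   # replicate the module across visible GPUs
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
```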

Generally, MIRTorch solves image reconstruction problems with the cost function $\mathrm{argmin}_{x} \|Ax-y\|_2^2 + \lambda R(x)$. $A$ denotes the system matrix; when it is linear, one may use a `LinearMap` to apply it efficiently. $y$ denotes the measurements. $R(\cdot)$ denotes the regularizer, which determines which `Alg` to use. One may refer to [1](https://web.eecs.umich.edu/~fessler/book/), [2](https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf) and [3](https://www.youtube.com/watch?v=J6_5rPYnr_s) for more tutorials on optimization.
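
As a toy illustration of this cost function, here is a plain proximal-gradient (ISTA-style) sketch with an ℓ1 regularizer; it is not the package's `Alg` or `Prox` implementations, and the dense matrix stands in for a `LinearMap`:

```python
import torch

def soft_threshold(z, t):
    # proximal operator of t * ||.||_1
    return torch.sign(z) * torch.clamp(z.abs() - t, min=0.0)

def ista(A, At, y, lam, step, n_iter=200, x0=None):
    """Solve argmin_x ||A x - y||_2^2 + lam * ||x||_1 by proximal gradient.
    A / At are callables for the forward operator and its adjoint."""
    x = torch.zeros_like(At(y)) if x0 is None else x0.clone()
    for _ in range(n_iter):
        grad = 2.0 * At(A(x) - y)                     # gradient of the data-fit term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Example with a random dense operator standing in for A:
M = torch.randn(40, 100)
x_sparse = torch.zeros(100)
x_sparse[[5, 17, 80]] = torch.tensor([1.0, -2.0, 3.0])
y = M @ x_sparse
L = 2 * torch.linalg.matrix_norm(M, 2) ** 2           # Lipschitz constant of the gradient
x_hat = ista(lambda v: M @ v, lambda r: M.T @ r, y, lam=0.1, step=1.0 / L)
```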

Here we provide several notebook tutorials focused on MRI, where $A$ is FFT or NUFFT.
Here we provide several notebook tutorials focused on MRI, where $A$ is FFT or NUFFT.

- `/example/demo_mnist.ipynb` shows the LASSO on MNIST with FISTA and POGM.
- `/example/demo_mnist.ipynb` shows the LASSO on MNIST with FISTA and POGM.
- `/example/demo_mri.ipynb` contains the SENSE (CG-SENSE) and **B0**-informed reconstruction with penalized weighted least squares (*PWLS*).
- `/example/demo_3d.ipynb` contains the 3D non-Cartesian MR reconstruction. *New!* Try the Toeplitz-embedding version of B0-informed reconstruction, which reduces an hour-long recon to about 5 seconds.
- `/example/demo_cs.ipynb` shows the compressed sensing reconstruction of under-determined MRI signals.
- `/example/demo_dl.ipynb` exhibits the dictionary learning results.
- `/example/demo_mlem` showcases SPECT recon algorithms, including EM and CNN.

Since MIRTorch is differentiable, one may use AD to update many parameters, for example the weights of a reconstruction neural network. More importantly, one may update the imaging system itself via gradient-based and data-driven methods. As a use case, the [Bjork repo](https://github.com/guanhuaw/Bjork) contains MRI sampling pattern optimization examples. One may use the reconstruction loss as the objective function to jointly optimize reconstruction algorithms and the sampling pattern. See [this video](https://www.youtube.com/watch?v=sLFOf5EvVAs) on how to jointly optimize reconstruction and acquisition.
Since MIRTorch is differentiable, one may use AD to update many parameters, for example the weights of a reconstruction neural network. More importantly, one may update the imaging system itself via gradient-based and data-driven methods. As a use case, the [Bjork repo](https://github.com/guanhuaw/Bjork) contains MRI sampling pattern optimization examples. One may use the reconstruction loss as the objective function to jointly optimize reconstruction algorithms and the sampling pattern. See [this video](https://www.youtube.com/watch?v=sLFOf5EvVAs) on how to jointly optimize reconstruction and acquisition.
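
Below is a toy sketch of this idea: a reconstruction loss is differentiated with respect to a relaxed sampling mask, which is then updated by gradient descent. This is plain PyTorch for intuition only; real trajectory optimization as in BJORK is far more involved:

```python
import torch

# Toy forward model: undersampled 1D FFT with a learnable (relaxed) sampling mask.
nx = 64
x_true = torch.randn(nx, dtype=torch.cfloat)
mask_logits = torch.zeros(nx, requires_grad=True)        # acquisition parameter
opt = torch.optim.Adam([mask_logits], lr=1e-1)

for _ in range(100):
    mask = torch.sigmoid(mask_logits)                     # relaxed 0/1 sampling mask
    y = mask * torch.fft.fft(x_true, norm="ortho")        # "acquisition"
    x_rec = torch.fft.ifft(mask * y, norm="ortho")        # crude zero-filled "reconstruction"
    recon_loss = (x_rec - x_true).abs().pow(2).mean()     # reconstruction loss
    loss = recon_loss + 0.1 * mask.mean()                 # penalize the sampling budget
    opt.zero_grad()
    loss.backward()                                       # gradients flow through acquisition + recon
    opt.step()
```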

------

@@ -86,8 +86,8 @@ If the code is useful to your research, please cite:
```bibtex
@article{wang:22:bjork,
author={Wang, Guanhua and Luo, Tianrui and Nielsen, Jon-Fredrik and Noll, Douglas C. and Fessler, Jeffrey A.},
journal={IEEE Transactions on Medical Imaging},
title={B-spline Parameterized Joint Optimization of Reconstruction and K-space Trajectories ({BJORK}) for Accelerated {2D} {MRI}},
journal={IEEE Transactions on Medical Imaging},
title={B-spline Parameterized Joint Optimization of Reconstruction and K-space Trajectories ({BJORK}) for Accelerated {2D} {MRI}},
year={2022},
pages={1-1},
doi={10.1109/TMI.2022.3161875}}
@@ -97,7 +97,7 @@ If the code is useful to your research, please cite:
@inproceedings{wang:22:mirtorch,
title={{MIRTorch}: A {PyTorch}-powered Differentiable Toolbox for Fast Image Reconstruction and Scan Protocol Optimization},
author={Wang, Guanhua and Shah, Neel and Zhu, Keyue and Noll, Douglas C. and Fessler, Jeffrey A.},
booktitle={Proc. Intl. Soc. Magn. Reson. Med. (ISMRM)},
booktitle={Proc. Intl. Soc. Magn. Resonance. Med. (ISMRM)},
pages={4982},
year={2022}
}
@@ -106,8 +106,8 @@ If the code is useful to your research, please cite:
```bibtex
@ARTICLE{li:23:tet,
author={Li, Zongyu and Dewaraja, Yuni K. and Fessler, Jeffrey A.},
journal={IEEE Transactions on Radiation and Plasma Medical Sciences},
title={Training End-to-End Unrolled Iterative Neural Networks for SPECT Image Reconstruction},
journal={IEEE Transactions on Radiation and Plasma Medical Sciences},
title={Training End-to-End Unrolled Iterative Neural Networks for SPECT Image Reconstruction},
year={2023},
volume={7},
number={4},
@@ -120,4 +120,4 @@ If the code is useful to your research, please cite:

### License

This package uses the BSD3 license.
This package uses the BSD3 license.
3 changes: 0 additions & 3 deletions docs/API.rst
@@ -10,6 +10,3 @@ API
Prox
Algorithms
DL



3 changes: 0 additions & 3 deletions docs/DL.rst
@@ -9,6 +9,3 @@ Dictionary learning methods
:nosignatures:

mirtorch.dic.soup



2 changes: 1 addition & 1 deletion docs/HISTORY.md
@@ -19,4 +19,4 @@
0.0.1 (2022-02-04)
---
- Add the readthedocs documentation
- Add the preconditioner to CG
- Add the preconditioner to CG
7 changes: 0 additions & 7 deletions docs/LinearMap.rst
@@ -68,10 +68,3 @@ CT system models
:nosignatures:

mirtorch.linear.ct.Bdd







22 changes: 11 additions & 11 deletions docs/README.md
@@ -5,7 +5,7 @@

A Py***Torch***-based differentiable ***I***mage ***R***econstruction ***T***oolbox, developed at the University of ***M***ichigan.

The work is inspired by [MIRT](https://github.com/JeffFessler/mirt), a widely acclaimed toolbox for medical image reconstruction.
The work is inspired by [MIRT](https://github.com/JeffFessler/mirt), a widely acclaimed toolbox for medical image reconstruction.

The main objective is to facilitate rapid, data-driven image reconstruction using CPUs and GPUs through fast prototyping and iteration. Researchers can conveniently develop new model-based and learning-based methods (e.g., unrolled neural networks) with abstraction layers. The availability of auto-differentiation enables optimization of imaging protocols and reconstruction parameters using gradient methods.

@@ -16,7 +16,7 @@ Documentation: https://mirtorch.readthedocs.io/en/latest/
### Installation

We recommend [pre-installing `PyTorch` first](https://pytorch.org/).
To install the `MIRTorch` package, after cloning the repo, please try `pip install -e .` (one can modify the package locally with this `-e` option).
To install the `MIRTorch` package, after cloning the repo, please try `pip install -e .` (one can modify the package locally with this `-e` option).

------

@@ -44,7 +44,7 @@ Currently, the package includes the conjugate gradient (CG), fast iterative thre

#### Dictionary learning

For dictionary learning-based reconstruction, we implemented an efficient dictionary learning algorithm ([SOUP-DIL](https://arxiv.org/abs/1511.06333)) and orthogonal matching pursuit ([OMP](https://ieeexplore.ieee.org/abstract/document/342465/?casa_token=aTDkQVCM9WEAAAAA:5rXu9YikP822bCBvkhYxKWlBTJ6Fn6baTQJ9kuNrU7K-64EmGOAczYvF2dTW3al3PfPdwJAiYw)). Due to PyTorch’s limited support of sparse matrices, we use SciPy as the backend.
For dictionary learning-based reconstruction, we implemented an efficient dictionary learning algorithm ([SOUP-DIL](https://arxiv.org/abs/1511.06333)) and orthogonal matching pursuit ([OMP](https://ieeexplore.ieee.org/abstract/document/342465/?casa_token=aTDkQVCM9WEAAAAA:5rXu9YikP822bCBvkhYxKWlBTJ6Fn6baTQJ9kuNrU7K-64EmGOAczYvF2dTW3al3PfPdwJAiYw)). Due to PyTorch’s limited support of sparse matrices, we use SciPy as the backend.

#### Multi-GPU support

@@ -56,16 +56,16 @@ Currently, MIRTorch uses `torch.DataParallel` to support multiple GPUs. One may

Generally, MIRTorch solves image reconstruction problems with the cost function $\mathrm{argmin}_{x} \|Ax-y\|_2^2 + \lambda R(x)$. $A$ denotes the system matrix; when it is linear, one may use a `LinearMap` to apply it efficiently. $y$ denotes the measurements. $R(\cdot)$ denotes the regularizer, which determines which `Alg` to use. One may refer to [1](https://web.eecs.umich.edu/~fessler/book/), [2](https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf) and [3](https://www.youtube.com/watch?v=J6_5rPYnr_s) for more tutorials on optimization.

Here we provide several notebook tutorials focused on MRI, where $A$ is FFT or NUFFT.
Here we provide several notebook tutorials focused on MRI, where $A$ is FFT or NUFFT.

- `/example/demo_mnist.ipynb` shows the LASSO on MNIST with FISTA and POGM.
- `/example/demo_mnist.ipynb` shows the LASSO on MNIST with FISTA and POGM.
- `/example/demo_mri.ipynb` contains the SENSE (CG-SENSE) and **B0**-informed reconstruction with penalized weighted least squares (*PWLS*).
- `/example/demo_3d.ipynb` contains the 3D non-Cartesian MR reconstruction. *New!* Try the Toeplitz-embedding version of B0-informed reconstruction, which reduces an hour-long recon to about 5 seconds.
- `/example/demo_cs.ipynb` shows the compressed sensing reconstruction of under-determined MRI signals.
- `/example/demo_dl.ipynb` exhibits the dictionary learning results.
- `/example/demo_mlem` showcases SPECT recon algorithms, including EM and CNN.

Since MIRTorch is differentiable, one may use AD to update many parameters, for example the weights of a reconstruction neural network. More importantly, one may update the imaging system itself via gradient-based and data-driven methods. As a use case, the [Bjork repo](https://github.com/guanhuaw/Bjork) contains MRI sampling pattern optimization examples. One may use the reconstruction loss as the objective function to jointly optimize reconstruction algorithms and the sampling pattern. See [this video](https://www.youtube.com/watch?v=sLFOf5EvVAs) on how to jointly optimize reconstruction and acquisition.
Since MIRTorch is differentiable, one may use AD to update many parameters, for example the weights of a reconstruction neural network. More importantly, one may update the imaging system itself via gradient-based and data-driven methods. As a use case, the [Bjork repo](https://github.com/guanhuaw/Bjork) contains MRI sampling pattern optimization examples. One may use the reconstruction loss as the objective function to jointly optimize reconstruction algorithms and the sampling pattern. See [this video](https://www.youtube.com/watch?v=sLFOf5EvVAs) on how to jointly optimize reconstruction and acquisition.

------

@@ -86,8 +86,8 @@ If the code is useful to your research, please cite:
```bibtex
@article{wang:22:bjork,
author={Wang, Guanhua and Luo, Tianrui and Nielsen, Jon-Fredrik and Noll, Douglas C. and Fessler, Jeffrey A.},
journal={IEEE Transactions on Medical Imaging},
title={B-spline Parameterized Joint Optimization of Reconstruction and K-space Trajectories ({BJORK}) for Accelerated {2D} {MRI}},
journal={IEEE Transactions on Medical Imaging},
title={B-spline Parameterized Joint Optimization of Reconstruction and K-space Trajectories ({BJORK}) for Accelerated {2D} {MRI}},
year={2022},
pages={1-1},
doi={10.1109/TMI.2022.3161875}}
@@ -106,8 +106,8 @@ If the code is useful to your research, please cite:
```bibtex
@ARTICLE{li:23:tet,
author={Li, Zongyu and Dewaraja, Yuni K. and Fessler, Jeffrey A.},
journal={IEEE Transactions on Radiation and Plasma Medical Sciences},
title={Training End-to-End Unrolled Iterative Neural Networks for SPECT Image Reconstruction},
journal={IEEE Transactions on Radiation and Plasma Medical Sciences},
title={Training End-to-End Unrolled Iterative Neural Networks for SPECT Image Reconstruction},
year={2023},
volume={7},
number={4},
@@ -120,4 +120,4 @@ If the code is useful to your research, please cite:

### License

This package uses the BSD3 license.
This package uses the BSD3 license.
35 changes: 21 additions & 14 deletions docs/conf.py
@@ -12,46 +12,53 @@
#
import os
import sys
sys.path.append(os.path.abspath('../..'))
sys.path.append(os.path.abspath('..'))

sys.path.append(os.path.abspath("../.."))
sys.path.append(os.path.abspath(".."))

# -- Project information -----------------------------------------------------

project = 'MIRTorch'
copyright = '2021, Guanhua Wang, Neel Shah, Jeffrey A. Fessler'
author = 'Guanhua Wang, Neel Shah, Jeffrey A. Fessler'
project = "MIRTorch"
copyright = "2021, Guanhua Wang, Neel Shah, Jeffrey A. Fessler"
author = "Guanhua Wang, Neel Shah, Jeffrey A. Fessler"


# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ["sphinx.ext.napoleon", "myst_parser", "sphinx.ext.autodoc", "sphinx.ext.autosummary", 'sphinx.ext.mathjax']
extensions = [
"sphinx.ext.napoleon",
"myst_parser",
"sphinx.ext.autodoc",
"sphinx.ext.autosummary",
"sphinx.ext.mathjax",
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
templates_path = ["_templates"]

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_theme = "sphinx_rtd_theme"

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_static_path = ["_static"]

source_suffix = {
    '.rst': 'restructuredtext',
    '.txt': 'restructuredtext',
    '.md': 'markdown',
}
".rst": "restructuredtext",
".txt": "restructuredtext",
".md": "markdown",
}
12 changes: 3 additions & 9 deletions docs/generated/mirtorch.alg.CG.rst
@@ -5,19 +5,13 @@

.. autoclass:: CG


.. automethod:: __init__


.. rubric:: Methods

.. autosummary::

~CG.__init__
~CG.run





