This is the official PyTorch implementation of our manuscript:
Promoting fast MR imaging pipeline by full-stack AI
Zhiwen Wang, Bowen Li, Hui Yu, Zhongzhou Zhang, Maosong Ran, Wenjun Xia, Ziyuan Yang, Jingfeng Lu, Hu Chen, Jiliu Zhou, Hongming Shan, Yi Zhang
Accepted by iScience (Cell Press)
git clone https://github.com/wangzhiwen-scu/FSL.git
cd FSL
Here's a summary of the key dependencies.
- Python 3.7
- PyTorch 1.7.1
We recommend using conda to install all of the dependencies.
conda env create -f environment.yaml
To activate the environment, run:
conda activate fsl
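After activating the environment, you can sanity-check the install with a short Python snippet (a minimal sketch; it only assumes the pinned versions above, and CUDA availability depends on your machine):

```python
import sys
import torch

# Confirm the interpreter and framework match the pinned dependencies.
print(sys.version)                # expected: Python 3.7.x
print(torch.__version__)          # expected: 1.7.1 (possibly with a +cuXXX build suffix)
print(torch.cuda.is_available())  # True only if a CUDA-capable GPU and driver are present
```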
All data and models can be downloaded from Google Drive. The download is a zip file (~843 MB) containing the demo testing data and the parameter files of the compared models.
Then place the demo testing data in:
├── datasets
│ ├── brain
│ │ ├── OASI1_MRB
│ │ ├── testing-h5py
│ │ │ ├── demo
│ │ │ │ └── oasis1_disc1_OAS1_0042_MR1.h5
│ ├── cardiac
│ └── prostate
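Once the file is in place, you can peek inside it with h5py to see what the demo volume contains (a minimal sketch; the dataset names inside the file are not documented here, so the code lists them rather than assuming any):

```python
import h5py

# Path taken from the directory layout above.
path = "datasets/brain/testing-h5py/demo/oasis1_disc1_OAS1_0042_MR1.h5"

with h5py.File(path, "r") as f:
    # Recursively print every item's name and shape; groups have no shape.
    def show(name, obj):
        shape = getattr(obj, "shape", None)
        print(name, shape if shape is not None else "(group)")
    f.visititems(show)
```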
Place the parameter files in:
├── model_zoo
│ ├── pretrained_seg
│ │ └── OASI1_MRB_3seg.pth
│ └── tab1
│ └── OASI1_MRB
│ ├── asl_ablation_seqmdrecnet_bg_step3_1_local__0.05_2D.pth
│ ├── csl_seqmri_unet__0.05_2D.pth
│ ├── csmri1__0.05.pth
│ ├── csmri2__5.pth
│ └── csmtl__0.05.pth
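Each .pth file can be inspected before wiring it into a model (a minimal sketch; whether a given file stores a bare state_dict, a wrapper dict, or a pickled module is an assumption you should verify per file):

```python
import torch

# Load one of the pretrained checkpoints onto the CPU for inspection.
ckpt = torch.load("model_zoo/pretrained_seg/OASI1_MRB_3seg.pth", map_location="cpu")

if isinstance(ckpt, dict):
    # Either a state_dict (parameter-name keys) or a wrapper holding one.
    for key in list(ckpt.keys())[:10]:
        print(key)
else:
    # Some checkpoints pickle the entire nn.Module instead.
    print(type(ckpt))
```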
Please see runner/main/asl_mixed_ablation_seq_mdrec_v2_step3_1_bg_localloss.py for an example of how to train FSL.
To test FSL on the demo data, run:

bash demo.sh
Parts of the subsampling learning network are adapted from LOUPE and SeqMRI, and parts of the reconstruction network structure are adapted from MD-Recon-Net:
- LOUPE: https://github.com/cagladbahadir/LOUPE
- SeqMRI: https://github.com/tianweiy/SeqMRI
- MD-Recon-Net: https://github.com/Deep-Imaging-Group/MD-Recon-Net
Many thanks for their great work!
If you have any questions, please feel free to contact Zhiwen Wang (wangzhiwen_scu@163.com).
If you find this project useful, please consider citing:
@article{wang2024promoting,
  title={Promoting fast MR imaging pipeline by full-stack AI},
  author={Wang, Zhiwen and Li, Bowen and Yu, Hui and Zhang, Zhongzhou and Ran, Maosong and Xia, Wenjun and Yang, Ziyuan and Lu, Jingfeng and Chen, Hu and Zhou, Jiliu and others},
  journal={iScience},
  volume={27},
  number={1},
  year={2024},
  publisher={Elsevier}
}