Official repository for: Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining
Swin-UMamba is evaluated on three medical image segmentation datasets:

- AbdomenMRI
- Endoscopy
- Microscopy
Step-1: Create a new conda environment & install requirements
```bash
conda create -n swin_umamba python=3.10
conda activate swin_umamba

pip install torch==2.0.1 torchvision==0.15.2
pip install causal-conv1d==1.1.1
pip install mamba-ssm
pip install torchinfo timm numba
```
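The pinned versions above assume a CUDA-enabled GPU, since `mamba-ssm` and `causal-conv1d` ship CUDA kernels. As a quick sanity check (purely illustrative, not part of the official setup), you can verify that the core dependencies import cleanly:

```bash
# Check that PyTorch sees the GPU and the Mamba kernels import without errors
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import causal_conv1d; from mamba_ssm import Mamba; print('mamba ok')"
```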
Step-2: Install Swin-UMamba
```bash
git clone https://github.com/JiarunLiu/Swin-UMamba
cd Swin-UMamba/swin_umamba
pip install -e .
```
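If the editable install succeeded, the nnU-Net v2 command-line entry points used below should be on your `PATH` (this mirrors the U-Mamba setup; the check itself is just a sanity test):

```bash
# Should print the nnUNetv2 preprocessing help text rather than "command not found"
nnUNetv2_plan_and_preprocess -h
```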
Dataset:
We use the same data and preprocessing strategy as U-Mamba. Download the datasets from U-Mamba, put them into the data folder, and then preprocess them with the following command:
```bash
nnUNetv2_plan_and_preprocess -d DATASET_ID --verify_dataset_integrity
```
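Note that nnU-Net v2 locates its data through three environment variables. A minimal sketch, assuming the repo's `data` folder layout (the exact paths here are an assumption; point them at wherever you placed the data):

```bash
# Paths below are illustrative, based on the repo's data/ folder; adjust to your layout
export nnUNet_raw="$(pwd)/data/nnUNet_raw"
export nnUNet_preprocessed="$(pwd)/data/nnUNet_preprocessed"
export nnUNet_results="$(pwd)/data/nnUNet_results"
```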
ImageNet pretrained model:
We use the ImageNet-pretrained VMamba-Tiny model from VMamba. Download the model checkpoint and put it at data/pretrained/vmamba/vmamba_tiny_e292.pth:
```bash
mkdir -p data/pretrained/vmamba
wget https://github.com/MzeroMiko/VMamba/releases/download/%2320240218/vssmtiny_dp01_ckpt_epoch_292.pth
mv vssmtiny_dp01_ckpt_epoch_292.pth data/pretrained/vmamba/vmamba_tiny_e292.pth
```
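To confirm the download is intact, you can check that the checkpoint deserializes (a minimal sketch, assuming the file is a standard `torch.save` dictionary; the key names may differ):

```bash
# Load on CPU and list the top-level keys to verify the file is not corrupted
python -c "import torch; ckpt = torch.load('data/pretrained/vmamba/vmamba_tiny_e292.pth', map_location='cpu'); print(list(ckpt.keys())[:5])"
```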
Use the following commands to train & evaluate Swin-UMamba:
```bash
# AbdomenMR dataset
bash scripts/train_AbdomenMR.sh MODEL_NAME

# Endoscopy dataset
bash scripts/train_Endoscopy.sh MODEL_NAME

# Microscopy dataset
bash scripts/train_Microscopy.sh MODEL_NAME
```
Here `MODEL_NAME` can be one of:

- `nnUNetTrainerSwinUMamba`: Swin-UMamba model with ImageNet pretraining
- `nnUNetTrainerSwinUMambaD`: Swin-UMamba$\dagger$ model with ImageNet pretraining
- `nnUNetTrainerSwinUMambaScratch`: Swin-UMamba model without ImageNet pretraining
- `nnUNetTrainerSwinUMambaDScratch`: Swin-UMamba$\dagger$ model without ImageNet pretraining
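For example, to train and evaluate Swin-UMamba with ImageNet pretraining on the AbdomenMR dataset:

```bash
bash scripts/train_AbdomenMR.sh nnUNetTrainerSwinUMamba
```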
You can download our model checkpoints here.
We thank the authors of nnU-Net, Mamba, UMamba, VMamba, and Swin-Unet for making their valuable code & data publicly available.
Citation:

```bibtex
@article{Swin-UMamba,
  title={Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining},
  author={Jiarun Liu and Hao Yang and Hong-Yu Zhou and Yan Xi and Lequan Yu and Yizhou Yu and Yong Liang and Guangming Shi and Shaoting Zhang and Hairong Zheng and Shanshan Wang},
  journal={arXiv preprint arXiv:2402.03302},
  year={2024}
}
```