VMambaMorph:
a Multi-Modality Deformable Image Registration Framework based on Visual State Space Model with Cross-Scan Module


The second version of the arXiv preprint is being processed!

This repo provides an implementation of the training and inference pipeline for VMambaMorph.
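The cross-scan module named in the title comes from VMamba: a 2-D feature map is unrolled into sequences along four scan directions (row-major, column-major, and their reverses) so the 1-D state space model sees every spatial ordering. A minimal NumPy sketch of that unrolling, for illustration only (this is not the repo's implementation, and the function name is hypothetical):

```python
import numpy as np

def cross_scan(feat):
    """Unroll a 2-D feature map (H, W) into the four VMamba-style
    scan sequences: row-major, column-major, and their reverses.
    Returns an array of shape (4, H * W)."""
    row = feat.reshape(-1)       # left-to-right, top-to-bottom
    col = feat.T.reshape(-1)     # top-to-bottom, left-to-right
    return np.stack([row, col, row[::-1], col[::-1]])

feat = np.arange(6).reshape(2, 3)  # toy 2x3 feature map
seqs = cross_scan(feat)
print(seqs.shape)  # (4, 6)
print(seqs[1])     # [0 3 1 4 2 5]
```

In the real module each sequence is processed by a selective-scan (Mamba) block and the four outputs are merged back onto the spatial grid.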

Graphical Abstract


Requirements

pip install mamba-ssm
pip install voxelmorph

You will also need some other basic Python libraries: SimpleITK, torchdiffeq, timm, flash_attn, ml_collections, fvcore, py.

Linux, an NVIDIA GPU, PyTorch 1.12+, CUDA 11.6+
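Before launching training, it can help to confirm the packages above are importable. A small, hypothetical helper (the package list is taken from this README; `find_spec` checks availability without importing):

```python
import importlib.util

# Packages this repo expects, per the Requirements section above.
REQUIRED = ["torch", "mamba_ssm", "voxelmorph", "SimpleITK",
            "torchdiffeq", "timm", "ml_collections", "fvcore"]

# find_spec returns None for packages that are not installed.
missing = [p for p in REQUIRED if importlib.util.find_spec(p) is None]
if missing:
    print("missing packages:", ", ".join(missing))
else:
    print("environment looks complete")
```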

Usage

  1. Clone the repo:
git clone https://github.com/ziyangwang007/VMambaMorph.git 
cd VMambaMorph
  2. Dataset: Download the SR-Reg dataset from its official page. (Please be aware that the input size is 128x128x128 in the VMambaMorph project, due to memory cost.)

  3. Train VoxelMorph (With or Without Feature Extractor)

python ./scripts/torch/train_cross.py --gpu 0 --epochs 300 --batch-size 1 --model-dir output/train_debug_vm --model vm
python ./scripts/torch/train_cross.py --gpu 0 --epochs 300 --batch-size 1 --model-dir output/train_debug_vmfeat --model vm-feat
  4. Train TransMorph (With or Without Feature Extractor)
python ./scripts/torch/train_cross.py --gpu 0 --epochs 300 --batch-size 1 --model-dir output/train_debug_tm --model tm
python ./scripts/torch/train_cross.py --gpu 0 --epochs 300 --batch-size 1 --model-dir output/train_debug_tmfeat --model tm-feat
  5. Train MambaMorph (With or Without Feature Extractor)
python ./scripts/torch/train_cross.py --gpu 0 --epochs 300 --batch-size 1 --model-dir output/train_debug_mm --model mm
python ./scripts/torch/train_cross.py --gpu 0 --epochs 300 --batch-size 1 --model-dir output/train_debug_mmfeat --model mm-feat
  6. Train VMambaMorph (With or Without Feature Extractor)
python ./scripts/torch/train_cross.py --gpu 0 --epochs 300 --batch-size 1 --model-dir output/train_debug_vimm --model vimm
python ./scripts/torch/train_cross.py --gpu 0 --epochs 300 --batch-size 1 --model-dir output/train_debug_vimmfeat --model vimm-feat
  7. Test
python ./scripts/torch/test_cross.py --gpu 0 --model XXX --load-model "Your Path/output/train_debug_xxx/min_train.pt"
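All of the models above predict a dense displacement field that is used to warp the moving volume onto the fixed one. As an illustration of that final resampling step only, here is a minimal NumPy sketch using nearest-neighbour sampling (VoxelMorph-style spatial transformers use trilinear interpolation; the function name is hypothetical and this is not the repo's code):

```python
import numpy as np

def warp_nearest(moving, disp):
    """Warp a 3-D moving volume with a dense displacement field.

    moving: (D, H, W) volume.
    disp:   (3, D, H, W) per-voxel displacement, in voxels.
    Uses nearest-neighbour sampling with border clamping."""
    D, H, W = moving.shape
    # Identity sampling grid, shape (3, D, H, W).
    grid = np.stack(np.meshgrid(np.arange(D), np.arange(H),
                                np.arange(W), indexing="ij"))
    # Displace, round to nearest voxel, clamp to the volume bounds.
    coords = np.rint(grid + disp).astype(int)
    for ax, size in enumerate((D, H, W)):
        np.clip(coords[ax], 0, size - 1, out=coords[ax])
    return moving[coords[0], coords[1], coords[2]]

vol = np.arange(8).reshape(2, 2, 2).astype(float)
zero = np.zeros((3, 2, 2, 2))
# A zero displacement field is the identity warp.
assert np.array_equal(warp_nearest(vol, zero), vol)
```

During training, the networks are optimized so that the warped moving volume matches the fixed volume under a similarity loss plus a smoothness penalty on the displacement field.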

Reference

  • Ziyang Wang, et al. "VMambaMorph: a Visual Mamba-based Framework with Cross-Scan Module for Deformable 3D Image Registration." arXiv preprint arXiv:2404.05105 (2024).
@article{wang2024vmambamorph,
  title={VMambaMorph: a Multi-Modality Deformable Image Registration Framework based on Visual State Space Model with Cross-Scan Module},
  author={Wang, Ziyang and Zheng, Jianqing and Ma, Chao and Guo, Tao},
  journal={arXiv preprint arXiv:2404.05105},
  year={2024}
}
  • Wang, Ziyang, et al. "Mamba-unet: Unet-like pure visual mamba for medical image segmentation." arXiv preprint arXiv:2402.05079 (2024).
@article{wang2024mamba,
  title={Mamba-unet: Unet-like pure visual mamba for medical image segmentation},
  author={Wang, Ziyang and Zheng, Jian-Qing and Zhang, Yichi and Cui, Ge and Li, Lei},
  journal={arXiv preprint arXiv:2402.05079},
  year={2024}
}

And, if applicable, please also cite MambaMorph:

@article{guo2024mambamorph,
  title={Mambamorph: a mamba-based backbone with contrastive feature learning for deformable mr-ct registration},
  author={Guo, Tao and Wang, Yinuo and Meng, Cai},
  journal={arXiv preprint arXiv:2401.13934},
  year={2024}
}

Contact

ziyang [dot] wang17 [at] gmail [dot] com

Acknowledgement

Mamba, MambaMorph, VMamba, TransMorph.
