LGMSNet is a lightweight framework for 2D and 3D medical image segmentation, implemented in PyTorch. It provides a complete pipeline for training, validation, and testing, with support for multiple datasets.
The accompanying paper, "LGMSNet: Thinning a Medical Image Segmentation Model via Dual-Level Multiscale Fusion", has been accepted as an oral presentation at ECAI 2025.
Project structure:

```
SegmsNet/
├── checkpoint/        # Saved model weights
├── data/              # Dataset directory
├── dataloader/        # Data loader modules
├── network/           # 2D segmentation models
├── network_3d/        # 3D segmentation models
├── src/               # Source code
├── utils/             # Utility functions and scripts
├── datamodule_my.py   # Custom data module
├── environment.yaml   # Environment configuration file
├── main_kvasir.py     # Main script for the Kvasir dataset
├── main.py            # Main script for 2D segmentation
├── main3d.py          # Main script for 3D segmentation
├── train.sh           # Training script
├── trainer.py         # Trainer module
└── README.md          # Project documentation
```
Ensure conda is installed, then create the environment using:
```bash
conda env create -f environment.yaml
conda activate uxnet3d
```

Place datasets in the `data/` directory. Supported datasets include:
- Kvasir
- BUSI
- TNSCUI
- ISIC18
- BTCV
- KiTS23
All of these datasets can be obtained via U-Bench.
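
As a quick sanity check, the snippet below verifies that dataset folders are present under `data/`. The folder names used here are only assumptions based on the list above; the exact names expected by the loaders in `dataloader/` may differ, so adjust them accordingly.

```python
from pathlib import Path

# Hypothetical dataset folder names -- adjust to match what the
# loaders in dataloader/ actually expect.
DATA_ROOT = Path("data")
EXPECTED = ["Kvasir", "BUSI", "TNSCUI", "ISIC18", "BTCV", "KiTS23"]

for name in EXPECTED:
    path = DATA_ROOT / name
    status = "found" if path.is_dir() else "missing"
    print(f"{name:<8} {status:>7}  ({path})")
```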
Run the `train.sh` script, which includes training configurations for all supported datasets:

```bash
bash train.sh
```
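
Before launching a long run, it can help to confirm that PyTorch and the GPU are visible from the activated environment. A minimal check, assuming `environment.yaml` installs a CUDA-enabled build of PyTorch:

```python
import torch

# Minimal environment check: prints the PyTorch version and whether
# a CUDA-capable GPU is visible to this environment.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```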