In this project, we compare the liver tumor segmentation accuracy of four architectures (UNet, ResUNet, SegResNet, and UNETR) on the 2017 LiTS dataset. We evaluate each architecture's performance using the Dice score.
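For context, the Dice score between a predicted mask A and a ground-truth mask B is 2|A∩B| / (|A| + |B|). The snippet below is only a minimal NumPy illustration of the metric, not the evaluation code used in this repo:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Dice coefficient between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 3D masks: prediction overlaps half of the ground truth
pred = np.zeros((4, 4, 4), dtype=np.uint8); pred[1:3, 1:3, 1:3] = 1
target = np.zeros((4, 4, 4), dtype=np.uint8); target[1:3, 1:3, :] = 1
print(dice_score(pred, target))  # 2 * 8 / (8 + 16) ~= 0.667
```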
The dataset is available for download at https://drive.google.com/drive/folders/13gtsM4-iFiBd_8cMKvIO7Q73d-YcdB0H?usp=share_link . Place it in the "data" directory following the instructions given in 'data_preparation.ipynb'. After running the data pre-processing steps there, you'll end up with the following structure:
data/task_data/TrainVolumes_full->
----images->
----------volume-0.nii
----------....
----------volume-104.nii
data/task_data/TrainLabels_full->
----------segmentation-0.nii
----------....
----------segmentation-104.nii
data/task_data/TestVolumes_full->
----images->
----------volume-105.nii
----------....
----------volume-130.nii
data/task_data/TestLabels_full->
----------segmentation-105.nii
----------....
----------segmentation-130.nii
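As a quick sanity check that the files landed where the scripts expect them, you can open one volume/label pair with nibabel. The paths below follow the tree above and are only an example; adjust them if you placed the data elsewhere:

```python
import numpy as np
import nibabel as nib

# Load one CT volume and its segmentation mask (paths assumed from the tree above)
vol = nib.load("data/task_data/TrainVolumes_full/images/volume-0.nii").get_fdata()
seg = nib.load("data/task_data/TrainLabels_full/segmentation-0.nii").get_fdata()

print(vol.shape, seg.shape)           # volume and mask should have matching shapes
print(np.unique(seg.astype(int)))     # LiTS labels: 0 = background, 1 = liver, 2 = tumor
```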
First clone this repository:
git clone https://github.com/mushroonhead/eecs545_medImageSeg.git
Then install the needed dependencies:
cd eecs545_medImageSeg
pip install -e '.[skimage]'
To train the four architectures:
- Training UNet, ResUNet, SegResNet in one-hot encoding mode (one label per voxel; see the label-encoding sketch after this list):
python train_one_hot.py # the network model can be passed as an argument
- Training UNet, ResUNet, SegResNet in multi-class label mode (voxels of a tumorous liver carry both the tumor and the liver tag):
python train_two_class.py # the network model can be passed as an argument
- For training UNETR: launch the notebook "UNETR_LiTS_segmentation_3d.ipynb". This notebook can also be used to visualize the segmentation results for all four architectures.
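For reference, the difference between the two label modes comes down to how the raw LiTS labels (0 = background, 1 = liver, 2 = tumor) are expanded into training targets. The sketch below is illustrative only; the function names are hypothetical and do not come from the training scripts:

```python
import numpy as np

def to_one_hot(seg: np.ndarray, num_classes: int = 3) -> np.ndarray:
    """One label per voxel: channels are mutually exclusive (background / liver / tumor)."""
    return np.stack([(seg == c) for c in range(num_classes)]).astype(np.float32)

def to_two_class(seg: np.ndarray) -> np.ndarray:
    """Overlapping targets: a tumor voxel is also counted as liver."""
    liver = (seg >= 1)   # liver channel includes tumor voxels
    tumor = (seg == 2)
    return np.stack([liver, tumor]).astype(np.float32)

seg = np.array([[0, 1, 2]])        # toy slice: background, liver, tumor
print(to_one_hot(seg)[:, 0, :])    # [[1,0,0],[0,1,0],[0,0,1]]  one class per voxel
print(to_two_class(seg)[:, 0, :])  # [[0,1,1],[0,0,1]]          tumor voxel is liver and tumor
```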