
RatLesNetv2

Repository of the paper RatLesNetv2: A Fully Convolutional Network for Rodent Brain Lesion Segmentation.

(Figure: RatLesNetv2 architecture)

1. Introduction

RatLesNetv2 is a convolutional neural network implemented in Python and PyTorch to segment rodent brain lesions. The code of RatLesNetv2 is kept simple to make it readable and accessible to a wide audience.

This implementation of RatLesNetv2 allows combining several models trained separately: eval.py generates one prediction per model and a final prediction combined via majority voting. Post-processing, i.e. removing small connected components (holes and islands), is also available.
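The majority-voting step can be illustrated with a minimal sketch (plain Python on flat lists of voxel labels; variable names are hypothetical, and the actual implementation lives in eval.py):

```python
# Majority voting across binary predictions from several models.
# Each "prediction" here is a flat list of 0/1 voxel labels; the real
# code operates on full 3D volumes, but the rule is the same: a voxel
# is labeled lesion (1) when more than half of the models vote 1.

def majority_vote(predictions):
    """Combine per-model binary predictions voxel-wise."""
    n_models = len(predictions)
    combined = []
    for votes in zip(*predictions):  # one tuple of votes per voxel
        combined.append(1 if sum(votes) * 2 > n_models else 0)
    return combined

# Three models disagree on some voxels; the majority wins.
pred_a = [1, 0, 1, 0]
pred_b = [1, 1, 0, 0]
pred_c = [1, 0, 0, 0]
print(majority_vote([pred_a, pred_b, pred_c]))  # [1, 0, 0, 0]
```

With an even number of models, ties (exactly half the votes) resolve to background under this rule.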

1.1 Files

.
 ├─eval.py # Generate and quantify predictions. It requires a file with the trained parameters of RatLesNetv2
 ├─train.py # Trains a RatLesNetv2 model. It generates the file required by eval.py
 └─lib 
   ├─DataWrapper.py # Reads and parses the NIfTI files
   ├─RatLesNetv2.py # RatLesNetv2 model implemented in Pytorch
   ├─RatLesNetv2Blocks.py # Blocks of RatLesNetv2 (Bottleneck and ResNet)
   ├─losses.py # Cross Entropy + Dice Loss functions
   ├─metric.py # Metrics to quantify segmentations quality i.e. Dice coeff., Hausdorff distance, Islands
   └─utils.py # Other functions.

2. Installation and Requirements

2.1 Requirements

  • Python (preferably version 3): the programming language of RatLesNetv2.
  • PyTorch (preferably with GPU support): the deep learning framework.
  • pip: the Python package installer.
  • Virtual environment (optional).

2.2 Installation

  1. Install all requirements from 2.1 Requirements

  2. Install the Python dependencies with pip

pip install scipy scikit-image nibabel

  3. Download the source code

git clone git@github.com:jmlipman/RatLesNetv2.git

2.3 Image format

  • RatLesNetv2 uses the NiBabel library to open scans. The recommended (and tested) input files are compressed NIfTI files (e.g. scan.nii.gz). If you need to convert Bruker files to NIfTI, you can use Bru2Nii.
  • Images and their corresponding labels must be in the same folder (see the expected path structure below).
  • Images must have the following 4 dimensions: Height x Width x Depth x Modality (our images were 256x256x18x1).
  • Labels must have values of 0 (background voxels) and 1 (lesion voxels) in 3 dimensions: Height x Width x Depth.

Example of path containing training data:

PATH
 └─Study 1
   └─24h (time-point)
     ├─32 (id of the scan)
     │ ├─scan.nii.gz (image)
     │ └─scan_lesion.nii.gz (label)
     └─35
       ├─scan.nii.gz
       └─scan_lesion.nii.gz
       ...
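The layout above can be traversed with a short stdlib-only sketch (the folder names are only illustrative; the real parsing is done in lib/DataWrapper.py):

```python
# Walk a data folder and pair each image with its label, mirroring the
# PATH/Study/time-point/scan-id layout described above. The file names
# match the defaults in lib/DataWrapper.py.
import os

def find_scan_pairs(root, scan_name="scan.nii.gz", label_name="scan_lesion.nii.gz"):
    """Return sorted (image_path, label_path_or_None) tuples under root."""
    pairs = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if scan_name in filenames:
            image = os.path.join(dirpath, scan_name)
            # The label may legitimately be absent at evaluation time.
            label = os.path.join(dirpath, label_name) if label_name in filenames else None
            pairs.append((image, label))
    return sorted(pairs)
```

Any scan-id folder missing its label simply yields a pair with label None, which matches how eval.py can run on unlabeled data.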

2.4 Setup

  • By default RatLesNetv2 expects NIfTI files with a single modality. You can change this number by editing the variable modalities.
  • The names of the scans and the ground truth files are expected to be the same across acquisitions. In the path example in 2.3 Image format the names are "scan.nii.gz" and "scan_lesion.nii.gz". You can change these names in lib/DataWrapper.py via self.scanName (images) and self.labelName (labels).

3. Training and Evaluation

3.1 Training

Arguments within [brackets] are optional.

python train.py --input DIR --output DIR [--validation DIR --loadMemory -1 --gpu X]
# Example
python train.py --input ~/data/in/MRI_Training_Data --output ~/data/out/Trained_Models --validation ~/data/in/MRI_Validation_Data --loadMemory 1
  • --input: Path containing all the subfolders of the data used for training/optimizing the network. Check 2.3. Image format to see the expected path structure.
  • --output: Path or name of the folder where the output files will be saved (training_loss, validation_loss, RatLesNetv2.model).
  • --validation: Path containing all the subfolders of the data used for calculating the validation error. Check 2.3. Image format to see the expected path structure.
  • --loadMemory: If 1, it will load into memory (RAM) the training and validation files. This will speed up the training since the script will not constantly open and close the same files. However, if too much data is used it may not fit into the RAM. In our case, 24 MRI scans use around 2GB of memory.
  • --gpu: Selects which GPU to use. Left at the default, RatLesNetv2 will use the default GPU (if any). A GPU is highly recommended for training RatLesNetv2.

Files generated by train.py:

  • training_loss: it contains the loss calculated during the training.
  • validation_loss: it contains the loss calculated during the validation. This file is generated if --validation option is used.
  • RatLesNetv2.model: parameters of RatLesNetv2 after the optimization/training. This file is necessary to generate the predictions by the eval.py script.

3.2 Evaluation

Arguments within [brackets] are optional.

python eval.py --input DIR --output DIR --model FILE [--gpu X]
# Example (evaluating with 1 model)
python eval.py --input ~/data/in/MRI_Testing_Data --output ~/data/out/Segmentation_Results --model ~/data/out/Trained_Models/1/RatLesNetv2.model [--gpu X]
# Example (evaluating with multiple models)
python eval.py --input ~/data/in/MRI_Testing_Data --output ~/data/out/Segmentation_Results --model ~/data/out/Trained_Models/*/RatLesNetv2.model [--gpu X]
  • --input: Path containing all the subfolders of the scans that RatLesNetv2 will segment. Check 2.3. Image format to see the expected path structure.
  • --output: Path or name of the folder where the output files will be saved.
  • --model: Location of the parameters of RatLesNetv2 after the model was optimized. It is the file generated by train.py called RatLesNetv2.model.
  • --gpu: Selects which GPU to use. Left at the default, RatLesNetv2 will use the default GPU (if any).

Files generated by eval.py:

  • A segmentation file per scan found in --input.
  • If the folders containing the scans also contain the ground truth (following the structure in 2.3 Image format), a file called stats.csv with the Dice coefficient, Hausdorff distance and number of islands will be generated.
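For reference, the Dice coefficient reported in stats.csv measures the voxel-wise overlap between a prediction and the ground truth. A minimal sketch on flat lists of labels (the actual implementation is in lib/metric.py):

```python
# Dice coefficient between two binary segmentations given as flat lists
# of 0/1 voxel labels: 2*|A intersect B| / (|A| + |B|).
# 1.0 means perfect overlap, 0.0 means no overlap at all.

def dice_coefficient(pred, truth):
    intersection = sum(p * t for p, t in zip(pred, truth))
    size_sum = sum(pred) + sum(truth)
    if size_sum == 0:  # both segmentations empty: treat as perfect agreement
        return 1.0
    return 2.0 * intersection / size_sum

print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1), about 0.67
```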

Enable/Disable Post-processing:

Post-processing is a final step that removes small "holes" and "islands" in the segmentations generated by the model (see Figure below). You can choose the threshold below which a hole/island is considered too small and removed. By default this value is 20; in other words, clusters of fewer than 20 voxels are removed.

# eval.py
removeSmallIslands_thr = 20
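The island-removal rule can be sketched on a 2D grid in plain Python (illustrative only: the real code operates on 3D volumes, and holes are handled analogously by applying the same rule to the background class):

```python
# Remove connected lesion components smaller than a threshold, using a
# breadth-first flood fill over 4-connected neighbours.
from collections import deque

def remove_small_islands(mask, thr=20):
    """mask: 2D list of 0/1. Removes 4-connected components with < thr voxels."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [row[:] for row in mask]  # do not mutate the input
    for i in range(h):
        for j in range(w):
            if mask[i][j] == 1 and not seen[i][j]:
                # Flood-fill one component, collecting its coordinates.
                comp, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) < thr:  # too small: erase the island
                    for y, x in comp:
                        out[y][x] = 0
    return out
```

With thr=2, a 4-voxel blob survives while an isolated single voxel is erased.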

(Figure: examples of holes and islands removed by post-processing)

4. License

MIT License

5. Citation

@article{10.3389/fnins.2020.610239,
author={Valverde, Juan Miguel and Shatillo, Artem and De Feo, Riccardo and Gröhn, Olli and Sierra, Alejandra and Tohka, Jussi},
title={RatLesNetv2: A Fully Convolutional Network for Rodent Brain Lesion Segmentation},
journal={Frontiers in Neuroscience},
volume={14},
pages={1333},
year={2020},
URL={https://www.frontiersin.org/article/10.3389/fnins.2020.610239},
DOI={10.3389/fnins.2020.610239},
ISSN={1662-453X},   
}

6. Contact

Feel free to write an email with questions or feedback about RatLesNetv2 at juanmiguel.valverde@uef.fi
