A U-Net model for lung lesion segmentation from CT images

Challenge website | Leaderboard

This directory contains a simple baseline method using MONAI for training, validation, and inference for the COVID-19 Lung CT Lesion Segmentation Challenge - 2020 (a MICCAI-endorsed event). The implementation is a basic deep learning pipeline that can serve as a starting point for further algorithmic improvements.


Overview

  1. Requirements
  2. Dependencies and installation
  3. Usage
    1. Training
    2. Inference
    3. Results
    4. Further reading
  4. Submitting to the leaderboard
  5. License

Requirements

The script is tested with:

  • Ubuntu 20.04 | Python 3.8 | CUDA 11.7

On a GPU with automatic mixed precision support:

  • the default training pipeline requires about 5.5GB memory,
  • the default inference pipeline requires about 2.3GB memory.

Dependencies and installation

PyTorch

Please follow the PyTorch installation instructions. To verify that you have a working installation, run:

python -c 'import torch; print(torch.rand(4, 2, device="cuda"))'

MONAI

pip install "git+https://github.com/Project-MONAI/MONAI#egg=monai[nibabel,ignite,tqdm]"

For more information please check out the installation guide.
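Similarly, you can check the MONAI installation (including which of the optional dependencies above were found) by printing its configuration:

python -c 'import monai; monai.config.print_config()'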

Usage

Download run_net.py to a local folder. The following instructions assume that the challenge data have also been downloaded and unzipped into the same folder.

Training (and validation at every epoch)

python run_net.py train --data_folder "COVID-19-20_v2/Train" --model_folder "runs"

During training, the top three models will be selected based on the per-epoch validation and stored in the folder specified by --model_folder.

The training uses MONAI's convenient file loading modules and a few random intensity and spatial augmentations (a code sketch of the full pipeline follows this list):

  • LoadImaged, Orientationd, Spacingd, ScaleIntensityRanged

Load the image data and reorient it to the LPS convention (axes pointing toward the patient's left, posterior, and superior), resample to a resolution of 1.25mm x 1.25mm x 5.00mm, and scale intensities from [-1000.0, 500.0] to [0.0, 1.0].

  • SpatialPadd

Pad each volume to at least 192x192 voxels in its first two spatial dimensions.

  • RandAffined, RandGaussianNoised, RandFlipd

Data augmentations randomized at every training iteration.

  • RandCropByPosNegLabeld

Randomly sample image/label pairs of shape (192, 192, 16) with balanced samples from the foreground (lesion) and background (anywhere else in the volume).

  • U-Net model

The U-Net model is monai.networks.nets.BasicUNet.

  • Sliding window inference

Segmentations are generated by monai.inferers.SlidingWindowInferer with a window size of (192, 192, 16). Note that the evaluation scores from this inference pipeline are not computed at the original data resolution and are reported here for model selection purposes only.
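The sketch below (not the actual run_net.py code) illustrates how these MONAI components could be composed. The dictionary keys, augmentation probabilities, affine ranges, network feature sizes, and sample counts are illustrative assumptions; the orientation, spacing, intensity range, patch size, and window size follow the description above.

from monai.inferers import SlidingWindowInferer
from monai.networks.nets import BasicUNet
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Orientationd, Spacingd,
    ScaleIntensityRanged, SpatialPadd, RandAffined, RandGaussianNoised,
    RandFlipd, RandCropByPosNegLabeld,
)

keys = ("image", "label")
train_transforms = Compose([
    LoadImaged(keys=keys),
    EnsureChannelFirstd(keys=keys),                       # add a channel dimension
    Orientationd(keys=keys, axcodes="LPS"),               # reorient to LPS
    Spacingd(keys=keys, pixdim=(1.25, 1.25, 5.0),
             mode=("bilinear", "nearest")),               # resample to 1.25 x 1.25 x 5.00 mm
    ScaleIntensityRanged(keys="image", a_min=-1000.0, a_max=500.0,
                         b_min=0.0, b_max=1.0, clip=True),
    SpatialPadd(keys=keys, spatial_size=(192, 192, -1)),  # pad first two dims to >= 192
    RandAffined(keys=keys, prob=0.15,                     # probabilities/ranges are assumptions
                rotate_range=(0.05, 0.05, 0.0), scale_range=(0.1, 0.1, 0.0),
                mode=("bilinear", "nearest")),
    RandGaussianNoised(keys="image", prob=0.15, std=0.01),
    RandFlipd(keys=keys, spatial_axis=0, prob=0.5),
    RandCropByPosNegLabeld(keys=keys, label_key="label",
                           spatial_size=(192, 192, 16),   # training patch size
                           pos=1, neg=1, num_samples=4),  # balanced foreground/background crops
])

net = BasicUNet(spatial_dims=3, in_channels=1, out_channels=2)  # 2 classes: background, lesion
inferer = SlidingWindowInferer(roi_size=(192, 192, 16), sw_batch_size=4, overlap=0.5)

# Validation/inference on a whole volume, window by window:
# logits = inferer(inputs=val_image, network=net)  # val_image: (B, 1, H, W, D) tensor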

Inference

python run_net.py infer --data_folder "COVID-19-20_v2/Validation" --model_folder "runs"

This command will load the best validation model, run inference, and store the predictions at ./output.

Results

(Figure: training curves)

This baseline method achieves 0.6904 ± 0.1801 Dice score on the challenge validation set.

Further reading

Submitting to the leaderboard

By default, the predictions generated at ./output/to_submit are ready for submission. Please zip the folder and follow the upload instructions to submit it.
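As one option, the folder can be zipped with Python's standard library; this is a sketch assuming the default output location, and the archive name and layout expected by the upload portal should be confirmed against the upload instructions:

python -c 'import shutil; shutil.make_archive("to_submit", "zip", root_dir="output", base_dir="to_submit")'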

For any queries, please contact the challenge organizers.

License

The code is licensed under the Apache License 2.0.