Add inference example for bundle (#604)
* [DLMED] add inference example

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] update based on latest design

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] update config content

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] update to latest design

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] update according to comments

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] enhance the expression

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] add more transforms

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] adjust config

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] add checkpoint logic

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] add checkpoint test

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] update imports

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] update for _requires_

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] add logging

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] fix typo

Signed-off-by: Nic Ma <nma@nvidia.com>

* Update README.md

* Update metadata.json

* [DLMED] add hugging face download

Signed-off-by: Nic Ma <nma@nvidia.com>

Co-authored-by: Wenqi Li <831580+wyli@users.noreply.github.com>
Nic-Ma and wyli authored Mar 25, 2022
1 parent ef59c37 commit ac88fee
Showing 5 changed files with 299 additions and 0 deletions.
147 changes: 147 additions & 0 deletions modules/bundles/spleen_segmentation/configs/inference.json
@@ -0,0 +1,147 @@
{
"imports": [
"$import glob",
"$import os"
],
"cudnn_opt": "$setattr(torch.backends.cudnn, 'benchmark', True)",
"dataset_dir": "/workspace/data/Task09_Spleen",
"ckpt_path": "/workspace/data/tutorials/modules/bundles/spleen_segmentation/models/model.pt",
"download_ckpt": "$monai.apps.utils.download_url('https://huggingface.co/MONAI/example_spleen_segmentation/resolve/main/model.pt', @ckpt_path)",
"device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
"datalist": "$list(sorted(glob.glob(@dataset_dir + '/imagesTs/*.nii.gz')))",
"network_def": {
"_target_": "UNet",
"spatial_dims": 3,
"in_channels": 1,
"out_channels": 2,
"channels": [
16,
32,
64,
128,
256
],
"strides": [
2,
2,
2,
2
],
"num_res_units": 2,
"norm": "batch"
},
"network": "$@network_def.to(@device)",
"preprocessing": {
"_target_": "Compose",
"transforms": [
{
"_target_": "LoadImaged",
"keys": "image"
},
{
"_target_": "EnsureChannelFirstd",
"keys": "image"
},
{
"_target_": "Orientationd",
"keys": "image",
"axcodes": "RAS"
},
{
"_target_": "Spacingd",
"keys": "image",
"pixdim": [1.5, 1.5, 2.0],
"mode": "bilinear"
},
{
"_target_": "ScaleIntensityRanged",
"keys": "image",
"a_min": -57,
"a_max": 164,
"b_min": 0,
"b_max": 1,
"clip": true
},
{
"_target_": "EnsureTyped",
"keys": "image"
}
]
},
"dataset": {
"_target_": "Dataset",
"data": "$[{'image': i} for i in @datalist]",
"transform": "@preprocessing"
},
"dataloader": {
"_target_": "DataLoader",
"dataset": "@dataset",
"batch_size": 1,
"shuffle": false,
"num_workers": 4
},
"inferer": {
"_target_": "SlidingWindowInferer",
"roi_size": [
96,
96,
96
],
"sw_batch_size": 4,
"overlap": 0.5
},
"postprocessing": {
"_target_": "Compose",
"transforms": [
{
"_target_": "Activationsd",
"keys": "pred",
"softmax": true
},
{
"_target_": "Invertd",
"keys": "pred",
"transform": "@preprocessing",
"orig_keys": "image",
"meta_key_postfix": "meta_dict",
"nearest_interp": false,
"to_tensor": true
},
{
"_target_": "AsDiscreted",
"keys": "pred",
"argmax": true
},
{
"_target_": "SaveImaged",
"keys": "pred",
"meta_keys": "pred_meta_dict",
"output_dir": "eval"
}
]
},
"handlers": [
{
"_target_": "CheckpointLoader",
"_requires_": "@download_ckpt",
"_disabled_": "$not os.path.exists(@ckpt_path)",
"load_path": "@ckpt_path",
"load_dict": {"model": "@network"}
},
{
"_target_": "StatsHandler",
"iteration_log": false
}
],
"evaluator": {
"_target_": "SupervisedEvaluator",
"_requires_": "@cudnn_opt",
"device": "@device",
"val_data_loader": "@dataloader",
"network": "@network",
"inferer": "@inferer",
"postprocessing": "@postprocessing",
"val_handlers": "@handlers",
"amp": false
}
}
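The config above can also be driven from Python instead of the `monai.bundle` CLI shown in the README below. A minimal sketch, assuming MONAI's `ConfigParser` API and relative paths that match this bundle's layout:
```
# minimal sketch: parse the bundle config and run the evaluator programmatically
from monai.bundle import ConfigParser

parser = ConfigParser()
parser.read_config("configs/inference.json")   # assumed relative path to the file above
parser.read_meta("configs/metadata.json")

# resolves "@..." references and "$..." expressions, then instantiates the components
evaluator = parser.get_parsed_content("evaluator")
evaluator.run()
```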
21 changes: 21 additions & 0 deletions modules/bundles/spleen_segmentation/configs/logging.conf
@@ -0,0 +1,21 @@
[loggers]
keys=root

[handlers]
keys=consoleHandler

[formatters]
keys=fullFormatter

[logger_root]
level=INFO
handlers=consoleHandler

[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=fullFormatter
args=(sys.stdout,)

[formatter_fullFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
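The file above follows the standard `logging.config` file format, so it can also be applied directly in a Python session; a small sketch, assuming the bundle's relative path:
```
# apply the console handler and formatter defined in logging.conf to the root logger
import logging
import logging.config

logging.config.fileConfig("configs/logging.conf", disable_existing_loggers=False)
logging.getLogger(__name__).info("logging configured from configs/logging.conf")
```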
76 changes: 76 additions & 0 deletions modules/bundles/spleen_segmentation/configs/metadata.json
@@ -0,0 +1,76 @@
{
"schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_202203171008.json",
"version": "0.1.0",
"changelog": {
"0.1.0": "complete the model package",
"0.0.1": "initialize the model package structure"
},
"monai_version": "0.8.0",
"pytorch_version": "1.10.0",
"numpy_version": "1.21.2",
"optional_packages_version": {
"nibabel": "3.2.1"
},
"task": "Decathlon spleen segmentation",
"description": "A pre-trained model for volumetric (3D) segmentation of the spleen from CT image",
"authors": "MONAI team",
"copyright": "Copyright (c) MONAI Consortium",
"data_source": "Task09_Spleen.tar from http://medicaldecathlon.com/",
"data_type": "dicom",
"image_classes": "single channel data, intensity scaled to [0, 1]",
"label_classes": "single channel data, 1 is spleen, 0 is everything else",
"pred_classes": "2 channels OneHot data, channel 1 is spleen, channel 0 is background",
"eval_metrics": {
"mean_dice": 0.96
},
"intended_use": "This is an example, not to be used for diagnostic purposes",
"references": [
"Xia, Yingda, et al. '3D Semi-Supervised Learning with Uncertainty-Aware Multi-View Co-Training. arXiv preprint arXiv:1811.12506 (2018). https://arxiv.org/abs/1811.12506.",
"Kerfoot E., Clough J., Oksuz I., Lee J., King A.P., Schnabel J.A. (2019) Left-Ventricle Quantification Using Residual U-Net. In: Pop M. et al. (eds) Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. STACOM 2018. Lecture Notes in Computer Science, vol 11395. Springer, Cham. https://doi.org/10.1007/978-3-030-12029-0_40"
],
"network_data_format": {
"inputs": {
"image": {
"type": "image",
"format": "magnitude",
"num_channels": 1,
"spatial_shape": [
160,
160,
160
],
"dtype": "float32",
"value_range": [
0,
1
],
"is_patch_data": false,
"channel_def": {
"0": "image"
}
}
},
"outputs": {
"pred": {
"type": "image",
"format": "segmentation",
"num_channels": 2,
"spatial_shape": [
160,
160,
160
],
"dtype": "float32",
"value_range": [
0,
1
],
"is_patch_data": false,
"channel_def": {
"0": "background",
"1": "spleen"
}
}
}
}
}
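As a quick sanity check, the metadata above can be cross-checked against the network definition in `inference.json` with the standard library; a sketch, assuming the bundle's relative paths:
```
# sketch: verify that the declared channel counts match the UNet definition
import json

with open("configs/metadata.json") as f:
    meta = json.load(f)
with open("configs/inference.json") as f:
    cfg = json.load(f)

net = cfg["network_def"]
fmt = meta["network_data_format"]
assert fmt["inputs"]["image"]["num_channels"] == net["in_channels"]
assert fmt["outputs"]["pred"]["num_channels"] == net["out_channels"]
print(meta["task"], meta["version"], meta["eval_metrics"])
```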
49 changes: 49 additions & 0 deletions modules/bundles/spleen_segmentation/docs/README.md
@@ -0,0 +1,49 @@
# Description
A pre-trained model for volumetric (3D) segmentation of the spleen from CT images.

# Model Overview
This model was trained with the runner-up [1] pipeline of the "Medical Segmentation Decathlon Challenge 2018", using the UNet architecture [2] with 32 training images and 9 validation images.

## Data
The training dataset is Task09_Spleen.tar from http://medicaldecathlon.com/.

## Training configuration
The training was performed on GPUs with at least 12 GB of memory.

Actual Model Input: 96 x 96 x 96

## Input and output formats
Input: 1 channel CT image

Output: 2 channels: Label 1: spleen; Label 0: everything else

## Scores
This model achieves the following Dice score on the validation data (our own split from the training dataset):

Mean Dice = 0.96

## Commands example
Execute inference:
```
python -m monai.bundle run evaluator --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf
```
Verify the metadata format:
```
python -m monai.bundle verify_metadata --meta_file configs/metadata.json --filepath eval/schema.json
```
Verify the input and output data shape of the network:
```
python -m monai.bundle verify_net_in_out network_def --meta_file configs/metadata.json --config_file configs/inference.json
```
Export the checkpoint to a TorchScript file:
```
python -m monai.bundle export network_def --filepath models/model.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json
```
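The pre-trained weights can also be fetched ahead of time instead of relying on the `download_ckpt` expression in `configs/inference.json`; a minimal sketch, assuming the same Hugging Face URL and a local `models/model.pt` target path:
```
# sketch: download the published checkpoint to the path the CheckpointLoader expects
from monai.apps.utils import download_url

download_url(
    url="https://huggingface.co/MONAI/example_spleen_segmentation/resolve/main/model.pt",
    filepath="models/model.pt",
)
```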

# Disclaimer
This is an example, not to be used for diagnostic purposes.

# References
[1] Xia, Yingda, et al. "3D Semi-Supervised Learning with Uncertainty-Aware Multi-View Co-Training." arXiv preprint arXiv:1811.12506 (2018). https://arxiv.org/abs/1811.12506.

[2] Kerfoot E., Clough J., Oksuz I., Lee J., King A.P., Schnabel J.A. (2019) Left-Ventricle Quantification Using Residual U-Net. In: Pop M. et al. (eds) Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. STACOM 2018. Lecture Notes in Computer Science, vol 11395. Springer, Cham. https://doi.org/10.1007/978-3-030-12029-0_40
6 changes: 6 additions & 0 deletions modules/bundles/spleen_segmentation/docs/license.txt
@@ -0,0 +1,6 @@
Third Party Licenses
-----------------------------------------------------------------------

/*********************************************************************/
i. Medical Segmentation Decathlon
http://medicaldecathlon.com/
