3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation

pengwuke/Keras-Brats-Improved-Unet3d

 
 


3D U-Net Convolution Neural Network with Keras

Tumor Segmentation Example

Reference

Web: http://learncv.cn/archives/category/projects/3d-u-net

Background

Originally designed after the paper on volumetric segmentation with a 3D U-Net. The code was written to be trained on the BRATS data set for brain tumors, but it can be easily modified for other 3D applications.

Tutorial using BRATS Data and Python 3

Training

  1. Download the BRATS 2018 data by following the steps outlined on the BRATS 2018 competition page, or from Baidu Yun (password: nbs3). Place the unzipped folders in the brats/data/original folder.
  2. Install dependencies:
nibabel,
keras,
pytables,
nilearn,
SimpleITK,
nipype

(nipype is required for preprocessing only)

  3. Install ANTs N4BiasFieldCorrection and add the location of the ANTs binaries to the PATH environment variable.

  4. Add the repository directory to the PYTHONPATH system variable:

$ export PYTHONPATH=${PWD}:$PYTHONPATH
# ANTs

For details on building ANTs and setting the environment variables, see: https://zhuanlan.zhihu.com/p/43932439

(An alternative to steps 3 and 4)

Workaround: install the ANTs software, preferably built from source. A precompiled build is also available; place its binaries directly under /usr/bin/.
Download: https://sourceforge.net/projects/advants/ (open the download on a Linux system to get the Linux build), or use the mirror below.
Mirror: https://github.com/MLearing/ANTs-1.9.x-Linux
  5. Convert the data to NIfTI format and perform image-wise normalization and correction:

cd into the brats subdirectory:

$ cd brats

Import the conversion function and run the preprocessing:

$ python
>>> from preprocess import convert_brats_data
>>> convert_brats_data("data/original", "data/preprocessed")
  6. Run the training:

To run training using the original UNet model:

$ python train.py

To run training using an improved UNet model (recommended):

$ python train_isensee2017.py

If you run out of memory during training, try setting config["patch_shape"] = (64, 64, 64) for starters. Also, read the "Configuration" notes at the bottom of this page.

Note

This code targets NVIDIA GPUs at the GTX 1080 level or above. On weaker hardware you may hit "Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)".
Cause: the data is stored in HDF5 format. At the end of each epoch, reading the validation data from the HDF5 file can be slower than the training loop consumes it, so the validation array comes back empty and a segmentation fault occurs.
Fix: before entering the get_training_and_validation_generator function, extract the data from the HDF5 file into plain arrays, then pass those arrays to the downstream functions.
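A minimal sketch of that workaround, using h5py and hypothetical dataset names ("data", "truth") as stand-ins for the repo's pytables layout — the idea is simply to read everything into RAM once, then hand the in-memory arrays to the generator code:

```python
import os
import tempfile

import h5py
import numpy as np

# Create a small stand-in HDF5 file (in the real project this is the
# preprocessed BRATS file produced by convert_brats_data).
tmp = os.path.join(tempfile.mkdtemp(), "brats.h5")
with h5py.File(tmp, "w") as f:
    f.create_dataset("data", data=np.random.rand(4, 1, 16, 16, 16).astype("float32"))
    f.create_dataset("truth", data=np.random.randint(0, 2, (4, 1, 16, 16, 16)).astype("uint8"))

def load_into_memory(path):
    """Read every dataset fully into numpy arrays up front, so later
    iteration never touches the (slow) HDF5 file again."""
    with h5py.File(path, "r") as f:
        return f["data"][:], f["truth"][:]

data, truth = load_into_memory(tmp)
print(data.shape, truth.shape)
```

The arrays now live in memory, so the validation reads at the end of each epoch cannot lag behind training.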

Write prediction images from the validation data

In the training above, part of the data was held out for validation purposes. To write the predicted label maps to file:

$ python predict.py

The predictions will be written in the prediction folder along with the input data and ground truth labels for comparison.

Results from patch-wise training using original UNet

(Figures: patch-wise training loss graph; patch-wise box-plot scores)

In the box plot above, the 'whole tumor' area is any labeled area. The 'tumor core' area corresponds to the combination of labels 1 and 4. The 'enhancing tumor' area corresponds to label 4. This is how the BRATS competition is scored. Both the loss graph and the box plot were created by running the evaluate.py script in the 'brats' folder after training completed.
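The region definitions above can be sketched as a small scoring routine. This is a numpy-only illustration of how a Dice coefficient per BRATS region could be computed from label maps, not the repo's evaluate.py code; the tiny label arrays are stand-ins:

```python
import numpy as np

def dice(pred_mask, truth_mask):
    """Dice coefficient between two boolean masks."""
    intersection = np.logical_and(pred_mask, truth_mask).sum()
    denom = pred_mask.sum() + truth_mask.sum()
    return 2.0 * intersection / denom if denom else 1.0

def brats_region_scores(pred_labels, truth_labels):
    """Score the three BRATS regions as defined above."""
    regions = {
        "whole tumor": lambda x: x > 0,            # any labeled voxel
        "tumor core": lambda x: np.isin(x, (1, 4)),  # labels 1 and 4
        "enhancing tumor": lambda x: x == 4,        # label 4 alone
    }
    return {name: dice(f(pred_labels), f(truth_labels)) for name, f in regions.items()}

truth = np.array([0, 1, 2, 4, 4, 0])
pred = np.array([0, 1, 2, 4, 0, 0])
scores = brats_region_scores(pred, truth)
print(scores)  # e.g. tumor core: 0.8, enhancing tumor: 2/3
```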

Results from Isensee et al. 2017 model

I also trained a model with the architecture as described in the 2017 BRATS proceedings on page 100. This architecture employs a number of changes to the basic UNet, including an equally weighted dice coefficient, residual connections, and deep supervision. This network was trained on whole images rather than patches. As the results below show, this network performed much better than the original UNet.
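To make the "equally weighted dice coefficient" idea concrete, here is a numpy sketch: compute a soft Dice per label channel and average the channels with equal weight, so rare labels count as much as common ones. The shapes and smoothing term are illustrative, not the repo's exact Keras implementation:

```python
import numpy as np

def soft_dice_per_channel(y_true, y_pred, smooth=1e-5):
    # y_true, y_pred: (channels, x, y, z) one-hot / probability volumes
    axes = tuple(range(1, y_true.ndim))
    intersection = np.sum(y_true * y_pred, axis=axes)
    denom = np.sum(y_true, axis=axes) + np.sum(y_pred, axis=axes)
    return (2.0 * intersection + smooth) / (denom + smooth)

def weighted_dice_loss(y_true, y_pred):
    # Equal weighting across labels: a plain mean over the channel axis.
    return 1.0 - np.mean(soft_dice_per_channel(y_true, y_pred))

y_true = np.zeros((3, 4, 4, 4))
y_true[0] = 1.0           # only the first label is present
y_pred = y_true.copy()    # a perfect prediction
print(weighted_dice_loss(y_true, y_pred))  # → loss is 0 for a perfect match
```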

(Figures: Isensee training loss graph; Isensee box-plot scores)

Configuration

Changing the configuration dictionary in the train.py or train_isensee2017.py scripts makes it easy to test out different model and training configurations. I would recommend trying the Isensee et al. model first and then modifying the parameters until you have satisfactory results. If you are running out of memory, try training with (64, 64, 64)-shaped patches. Reducing the "batch_size" and "validation_batch_size" parameters will also reduce the amount of memory required for training, as smaller batch sizes feed smaller chunks of data to the CNN. If the batch size is reduced to 1 and you are still running out of memory, you could also try changing the patch size to (32, 32, 32). Keep in mind, though, that smaller patch sizes may not perform as well as larger ones.
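A hypothetical sketch of those memory-saving overrides, applied in order of preference to the configuration dictionary. The key names follow the text above; the starting values are placeholders and the repo's actual defaults may differ:

```python
# Stand-in for the config dictionary at the top of train.py /
# train_isensee2017.py (values here are illustrative).
config = {
    "patch_shape": (128, 128, 128),
    "batch_size": 6,
    "validation_batch_size": 12,
}

# Running out of GPU memory? Shrink the patches first...
config["patch_shape"] = (64, 64, 64)
# ...then the batch sizes (smaller batches feed smaller chunks to the CNN)...
config["batch_size"] = 1
config["validation_batch_size"] = 2
# ...and only as a last resort drop to very small patches, since they
# may not perform as well:
# config["patch_shape"] = (32, 32, 32)
print(config)
```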

Using this code on other 3D datasets

If you want to train a 3D UNet on a different dataset, you can copy either the train.py or train_isensee2017.py script and modify it to read in your data rather than the preprocessed BRATS data it is currently set up to train on.
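A hedged sketch of what that modification could look like: replace the BRATS-specific loading with a generator that yields (inputs, targets) batches of your own volumes in (batch, channels, x, y, z) order. The shapes and the volume/label lists are placeholders, not part of the repo:

```python
import numpy as np

def my_data_generator(volumes, labels, batch_size=1):
    """Endlessly yield (x, y) batches shaped (batch, channels, x, y, z),
    the layout a Keras fit-by-generator training loop expects."""
    while True:
        for start in range(0, len(volumes), batch_size):
            x = np.stack(volumes[start:start + batch_size])
            y = np.stack(labels[start:start + batch_size])
            yield x, y

# Placeholder data standing in for your own preprocessed volumes.
volumes = [np.random.rand(1, 16, 16, 16).astype("float32") for _ in range(4)]
labels = [np.random.randint(0, 2, (1, 16, 16, 16)).astype("uint8") for _ in range(4)]

gen = my_data_generator(volumes, labels, batch_size=2)
x, y = next(gen)
print(x.shape, y.shape)  # (2, 1, 16, 16, 16) each
```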

Pre-trained Models

The following Keras models were trained on the BRATS 2017 data:

Citations

GBM Data Citation:

  • Spyridon Bakas, Hamed Akbari, Aristeidis Sotiras, Michel Bilello, Martin Rozycki, Justin Kirby, John Freymann, Keyvan Farahani, and Christos Davatzikos. (2017) Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-GBM collection. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2017.KLXWJJ1Q

LGG Data Citation:

  • Spyridon Bakas, Hamed Akbari, Aristeidis Sotiras, Michel Bilello, Martin Rozycki, Justin Kirby, John Freymann, Keyvan Farahani, and Christos Davatzikos. (2017) Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-LGG collection. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2017.GJQ7R0EF
