StreamingCNN

To train deep convolutional neural networks, the input data and the activations need to be kept in memory. Given the limited memory available in current GPUs, this limits the maximum dimensions of the input data. Here we demonstrate a method to train convolutional neural networks while holding only parts of the image in memory.
Please visit this repository for a complete training pipeline using this method: https://github.com/DIAGNijmegen/pathology-streaming-pipeline

This repository contains an example implementation of StreamingCNN, as published in the paper whose abstract follows (please cite that work when using this code):

Abstract

Due to memory constraints on current hardware, most convolutional neural networks (CNNs) are trained on sub-megapixel images. For example, most popular datasets in computer vision contain images much less than a megapixel in size (0.09MP for ImageNet and 0.001MP for CIFAR-10). In some domains such as medical imaging, multi-megapixel images are needed to identify the presence of disease accurately. We propose a novel method to directly train convolutional neural networks using any input image size end-to-end. This method exploits the locality of most operations in modern convolutional neural networks by performing the forward and backward pass on smaller tiles of the image. In this work, we show a proof of concept using images of up to 66-megapixels (8192x8192), saving approximately 50GB of memory per image. Using two public challenge datasets, we demonstrate that CNNs can learn to extract relevant information from these large images and benefit from increasing resolution. We improved the area under the receiver-operating characteristic curve from 0.580 (4MP) to 0.706 (66MP) for metastasis detection in breast cancer (CAMELYON17). We also obtained a Spearman correlation metric approaching state-of-the-art performance on the TUPAC16 dataset, from 0.485 (1MP) to 0.570 (16MP).
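
To make these memory numbers concrete, here is a rough back-of-the-envelope sketch of how quickly activation memory grows at 8192x8192 resolution. The channel counts and strides are illustrative assumptions, not values from the paper; fp32 activations (4 bytes per value) are assumed:

BYTES_PER_FLOAT32 = 4

# (channels, height, width) of a few early feature maps of a
# hypothetical ResNet-like first stage at 8192x8192 input resolution
feature_maps = [
    (64, 4096, 4096),   # stem convolution, stride 2
    (64, 2048, 2048),   # after max pooling
    (128, 1024, 1024),  # first downsampling stage
]

total_bytes = sum(c * h * w * BYTES_PER_FLOAT32 for c, h, w in feature_maps)
print(f"{total_bytes / 2**30:.1f} GiB")  # 5.5 GiB for just three feature maps

Backpropagation needs the activations of every layer, so summing over a full deep network quickly reaches the tens of gigabytes reported above, far beyond current GPU memory.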

See this notebook for a numerical comparison between streaming and conventional backpropagation.

See the Imagenette example for a comparison of training losses between streaming and conventional training.

Pseudocode example

# stream the convolutional layers tile by tile (600x600-pixel tiles)
sCNN = StreamingCNN(stream_layers, tile_shape=(1, 3, 600, 600))
str_output = sCNN.forward(image)
str_output.requires_grad = True  # retain the gradient at the streaming boundary

# the final (non-streaming) layers run as usual
final_output = final_layers(str_output)
loss = criterion(final_output, labels)

# backpropagate through the final layers, then stream the backward pass
loss.backward()
sCNN.backward(image, str_output.grad)
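
For context, the snippet below expands the pseudocode into a self-contained sketch. It assumes the StreamingCNN class from this repository's scnn.py; the layer definitions, image size, and labels are illustrative placeholders only, not the configuration used in the paper:

import torch
import torch.nn as nn
from scnn import StreamingCNN  # the implementation in this repository

# convolutional part to be streamed: local operations only
# (see "Model compatibility" below)
stream_layers = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
)

# non-streaming head: global operations are fine here
final_layers = nn.Sequential(
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2)
)

image = torch.randn(1, 3, 2400, 2400)  # larger than one 600x600 tile
labels = torch.tensor([1])
criterion = nn.CrossEntropyLoss()

sCNN = StreamingCNN(stream_layers, tile_shape=(1, 3, 600, 600))
str_output = sCNN.forward(image)       # streamed, tile-by-tile forward pass
str_output.requires_grad = True        # so str_output.grad is populated below
loss = criterion(final_layers(str_output), labels)
loss.backward()                        # gradients for the head and str_output
sCNN.backward(image, str_output.grad)  # streamed backward pass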

Requirements

  • PyTorch 1.0+
  • tqdm
  • numpy

Model compatibility

StreamingCNN should work with all layers that keep the local properties of a CNN intact, such as convolutions, pooling, and pointwise nonlinearities. Layers whose output depends on the entire feature map, such as batch and instance normalization, are therefore not supported in the streaming part of the network (they can still be used in the final, non-streaming layers).
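
As an illustrative sketch (not taken from the repository), the difference looks like this:

import torch.nn as nn

# OK to stream: convolution, pooling, and pointwise nonlinearities
# only look at a local neighbourhood of each pixel
streaming_safe = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)

# NOT streaming-safe: BatchNorm2d normalizes with statistics computed over
# the whole feature map, so its output at one tile depends on pixels
# outside that tile
not_streaming_safe = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
)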
