The Adipocyte U-net is a deep U-net architecture trained to segment adipocytes from histology imaging slides (both H&E and fluorescent).
We strongly recommend following these instructions using Python 3.5+:

1. Install `virtualenv`
2. Create a virtual environment: `virtualenv unet`
3. Source the environment: `source unet/bin/activate`
4. Clone the repository and enter it:
   ```shell
   git clone https://github.com/GlastonburyC/Adipocyte-U-net.git
   cd Adipocyte-U-net
   ```
5. Install the requirements: `pip install -r requirements.txt`
6. If some installs fail, it may be the version of OS X you're using; in that case run `export MACOSX_DEPLOYMENT_TARGET=10.14` and reinstall the requirements (step 5).
We have provided some small example tutorials to demonstrate both the classifier (InceptionV3) and the Adipocyte U-net in action. We present only a handful of images here, as the data we analysed in the paper pose their own engineering challenges (millions of images, several terabytes of data). However, the full data release is included at the bottom of this README, along with all network weights, images and manual annotations.
If you run the tutorials without a GPU, they will be slow (but should still take < 1 min).
An example script is included that classifies 30 tiles as containing adipocytes, not containing adipocytes, or being empty.
First, download the necessary files.
The script can be run as follows. Make sure you have downloaded the network weights; they can be found in checkpoints/tile_classifier_InceptionV3/url_to_tiles:
```shell
python3 cell_classifier.py --out-dir ./ \
    --weight_dir checkpoints/tile_classifier_InceptionV3/tile_adipocyte.weights.h5 \
    --image-path example_class_tiles \
    --n_cpu 4
```
This outputs a text file of probabilities indicating whether the network thinks each image contains adipocytes. After this step, we can keep only the adipocyte tiles (given some probability threshold) and train our Adipocyte U-net. Our classifier deduces that 10/30 images contain adipocytes (P > 0.90). We use these images downstream to perform segmentation and cell area estimation in the next tutorial.
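The thresholding step can be sketched as below. Note that the exact format of the classifier's output file is an assumption here (one tile name and probability per line); adapt the parsing to whatever `cell_classifier.py` actually writes.

```python
# Keep only tiles the classifier is confident contain adipocytes.
# ASSUMPTION: the output file has one "<tile_name> <probability>" pair per line.

THRESHOLD = 0.90  # probability cut-off used in the text above

def filter_adipocyte_tiles(lines, threshold=THRESHOLD):
    """Return the tile names whose adipocyte probability exceeds `threshold`."""
    keep = []
    for line in lines:
        name, prob = line.split()
        if float(prob) > threshold:
            keep.append(name)
    return keep

# Example with made-up probabilities:
example = ["tile_001.png 0.97", "tile_002.png 0.12", "tile_003.png 0.91"]
print(filter_adipocyte_tiles(example))  # → ['tile_001.png', 'tile_003.png']
```

In practice you would read the lines from the probabilities file the script produced, then copy or symlink the retained tiles into your U-net training directory.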
We have made this repository work with Binder. By clicking the Binder logo, you can launch a Docker image on the Binder website and use the tutorial notebook Tutorial.ipynb as if it were installed on your own laptop.
This tutorial, Tutorial.ipynb, is a walk-through example of how to use the Adipocyte U-net to perform image segmentation. In the tutorial we predict segmentations and use these predictions to obtain surface area estimates of the cell population present in each image.
This notebook will run on either a CPU or a GPU, but will be many times faster in a GPU environment.
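The area-estimation idea in the tutorial can be sketched as follows: label each connected blob in the predicted binary mask as one cell, count its pixels, and convert to physical units. This is a minimal pure-NumPy sketch, not the notebook's actual code; the function names and the pixel-to-micron scale are illustrative assumptions.

```python
import numpy as np

def label_cells(mask):
    """4-connected component labelling of a binary mask via flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue  # pixel already belongs to a labelled cell
        current += 1
        stack = [(i, j)]
        while stack:
            y, x = stack.pop()
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
                continue
            if mask[y, x] == 0 or labels[y, x]:
                continue
            labels[y, x] = current
            stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return labels, current

def cell_areas(mask, microns_per_pixel=1.0):
    """Per-cell areas (pixel counts scaled by an assumed micron-per-pixel factor)."""
    labels, n_cells = label_cells(mask)
    return [float((labels == k).sum()) * microns_per_pixel ** 2
            for k in range(1, n_cells + 1)]

# Toy mask with two "cells":
mask = np.zeros((8, 8), dtype=np.uint8)
mask[1:3, 1:4] = 1  # a 2x3 blob
mask[5:7, 5:7] = 1  # a 2x2 blob
print(cell_areas(mask))  # → [6.0, 4.0]
```

For real slides you would replace `mask` with a thresholded U-net prediction and set `microns_per_pixel` from the slide's scanning resolution; a library routine such as `scipy.ndimage.label` does the labelling step far faster than this illustrative flood fill.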
All the data to reproduce the manuscript are available below:
- training images for classifying adipocyte-containing tiles here
- trained InceptionV3 adipocyte tile classifier weights here
- U-net weights in the /checkpoints/ folder
- all annotations, training and validation image splits here
- all montage images and numpy arrays here
If you are predicting on your own adipocyte images and they deviate significantly from the H&E images used here, consider fine-tuning the Adipocyte U-net; otherwise, as in the tutorial notebook, you can use the trained weights provided here.