AI For Forging Isotropic Resolution for 3D Micrographs (AFFIRM3D)

User Section

What is this?

  • AFFIRM3D is a generative adversarial network (GAN) for improving the z-resolution of a 3D microscopy image until it matches its x/y-resolution
  • In theory, AFFIRM3D will be suitable for use on all kinds of anisotropic images, including SMLM and confocal images
  • It will be trained using both simulated and experimental data

FAQ

AIs for use in micrograph augmentation are typically biased by the training process to search for particular structures. How does AFFIRM3D overcome this?

  • AFFIRM3D's loss function is heavily weighted in favour of what we call 'The Fourier Loss'
  • The Fourier Loss is calculated as the difference between the (high-pass-filtered) Fourier-transformed z-projection (the 'z-spectrum') and its x and y counterparts (the 'x-' and 'y-spectra')
  • When training on experimental data, The Fourier Loss is the only loss used to train the generator network
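
A minimal sketch of how such a Fourier loss could look, assuming the 'z-projection' is the 1D profile obtained by summing over x and y (and likewise for the x- and y-projections), a cubic volume so all three spectra have the same length, and a hard low-frequency cutoff as the high-pass filter - the actual definition in the repo may differ:

```python
import torch

def fourier_loss(volume: torch.Tensor, cutoff: int = 4) -> torch.Tensor:
    """Sketch of a Fourier loss for a cubic (D, H, W) volume.
    `cutoff` (hypothetical) is the number of low-frequency bins
    discarded by the high-pass filter."""
    # 1D projections along z, y and x (sum over the other two axes)
    proj_z = volume.sum(dim=(1, 2))
    proj_y = volume.sum(dim=(0, 2))
    proj_x = volume.sum(dim=(0, 1))

    # high-pass-filtered magnitude spectra ('z-spectrum' and its x/y counterparts)
    spec_z = torch.abs(torch.fft.rfft(proj_z))[cutoff:]
    spec_y = torch.abs(torch.fft.rfft(proj_y))[cutoff:]
    spec_x = torch.abs(torch.fft.rfft(proj_x))[cutoff:]

    # penalise the difference between the z-spectrum and the x/y-spectra
    return torch.mean((spec_z - spec_x) ** 2) + torch.mean((spec_z - spec_y) ** 2)
```

In the anisotropic case the z-spectrum would first need interpolating onto the same frequency grid as the x/y-spectra (see the TODO list below).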

Surely there needs to be some sensitivity to the actual locations of the structures in the training process?

  • The network undergoes a 'pre-training' process using simulated data, where the actual pixel values of the ground truth are also used to obtain the loss for training the network
  • Importantly, The Fourier Loss is still weighted much more heavily than the signal-space (pixel-value) loss
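
For pre-training, the combined objective might look like the sketch below (reusing fourier_loss from above); the weights are placeholders chosen only to illustrate the imbalance, not the values used in the repo:

```python
import torch

FOURIER_WEIGHT = 100.0  # hypothetical weight: the Fourier term dominates
PIXEL_WEIGHT = 1.0      # hypothetical weight for the signal-space term

def pretraining_loss(generated: torch.Tensor, ground_truth: torch.Tensor) -> torch.Tensor:
    """Pre-training loss: heavily weighted Fourier loss plus a
    signal-space (pixel-value) loss against the simulated ground truth."""
    pixel_term = torch.nn.functional.l1_loss(generated, ground_truth)
    return FOURIER_WEIGHT * fourier_loss(generated) + PIXEL_WEIGHT * pixel_term
```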

Developer Section

TODO

preliminary: seeing the effect of the microtubule density

  • simulate high-density microtubules
  • simulate low-density microtubules
  • test on cubic (non-undersampled) data, see what difference it makes!
    • low-density microtubules didn't really work

1: training on data with undersampled z

  • simulate data with undersampled z

  • interpolate the z-spectrum so it can be subtracted from the x/y-spectra

  • clamp interpolated values below 0 to 0 to get rid of NaNs

  • offset the base pixel values in the image by a set amount to avoid these negative values - try adding 100 to the pixel values

  • try a monotonic cubic interpolator, because cubic-spline interpolators tend to overshoot and produce negative values

  • monotonic cubic interpolators seem to miss the finer details of the spectrum - maybe try oversampling it and then downsampling it back to the level of the other spectra?

  • OBSOLETE IDEA: upsampling the power spectra is a dead end - instead, we should be upsampling the z-axis of the data itself, since in many real experimental cases it will be sub-Nyquist

  • OBSOLETE IDEA: using SciPy to achieve this doesn't work (it's far too slow), because the data has to be taken off the GPU to run through the NumPy backend - instead, we need to use a PyTorch-based method

  • [-] OBSOLETE IDEA: this has been successfully done with the undersampled data (see the z-upsampling sketch after this list) - now we need to make the ground truth better sampled at the point of data generation

  • SUSAN MEETING:

    • METHOD 1: no pre-training, no ground truth
    • [-] METHOD 2: pre-training with ground truth
    • do sketches for both!
    • background reading on PatchGAN
  • make a dataset without the ground truth

  • remove the ground truth from the training loop

  • implement Susan's convolutional layer to downsample the generated image (see the z-downsampling sketch after this list)

  • check that the blur level increases as expected

  • calculate the real space loss between the downsampled generated image and the input

  • sanity check experiment: make the z-resolution MUCH worse in the input image - maybe 5x worse?

  • sanity check experiment: drop down the density of the microtubules a little bit

  • play with the windowing functions

  • weighting the L1 loss more heavily tends to result in blurriness - maybe bump up the GAN loss and see what that does?

  • L1 × 10000, GAN × 1000, Fourier × 1 = best so far

  • try more epochs - then try making a larger dataset
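
Sketch of the PyTorch-based z-upsampling referred to above - a generic use of torch.nn.functional.interpolate, with the upsampling factor as a placeholder, not the repo's actual code:

```python
import torch
import torch.nn.functional as F

def upsample_z(volume: torch.Tensor, z_factor: int = 2) -> torch.Tensor:
    """Upsample only the z-axis of a (D, H, W) volume without leaving the GPU.
    `z_factor` is a placeholder for the true undersampling ratio."""
    d, h, w = volume.shape
    v = volume[None, None]  # add batch and channel dims -> (1, 1, D, H, W)
    v = F.interpolate(v, size=(d * z_factor, h, w),
                      mode="trilinear", align_corners=False)
    return v[0, 0]          # back to (D * z_factor, H, W)
```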
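
Sketch of the fixed convolutional layer for re-degrading the generator output so it can be compared with the input in real space; the Gaussian width and stride are placeholders for the actual PSF mismatch and sampling ratio:

```python
import torch
import torch.nn.functional as F

def blur_and_downsample_z(volume: torch.Tensor,
                          sigma: float = 2.0, stride: int = 2) -> torch.Tensor:
    """Apply a fixed 1D Gaussian blur along z, then downsample z by `stride`.
    `sigma` and `stride` are placeholders for the real degradation model."""
    radius = int(3 * sigma)
    x = torch.arange(-radius, radius + 1, dtype=volume.dtype, device=volume.device)
    kernel = torch.exp(-x ** 2 / (2 * sigma ** 2))
    kernel = (kernel / kernel.sum()).view(1, 1, -1, 1, 1)  # (out, in, kD, kH, kW)

    v = volume[None, None]  # -> (1, 1, D, H, W)
    v = F.conv3d(v, kernel, stride=(stride, 1, 1), padding=(radius, 0, 0))
    return v[0, 0]

def real_space_loss(generated: torch.Tensor, lowres_input: torch.Tensor) -> torch.Tensor:
    """Real-space loss between the re-degraded generator output and the input
    (shapes must match the degradation used when simulating the input)."""
    return F.l1_loss(blur_and_downsample_z(generated), lowres_input)
```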

YOU ARE HERE

To Do: simulating data

1. make isotropic + isotropically sampled version of some data
2. make anisotropic + anisotropically sampled version of the same data
3. check that the 1D Gaussian kernel works to turn 1. into 2.
  • sanity check: simulate low-res and high-res versions of the same data, then see if you can convolve the high-res one down to match the low-res one

    • do the sampling ratio and the resolution ratio have to be the same? Surely this is a kind of bias - needs fixing
  • generate a CUBIC ground truth as well when you make your dataset - that way you'll have something to compare to the Gen(Upsampled(Input))

    • generate a test dataset and import that into the program as the sample image instead of picking out a random image from the training dataset
  • try graphing the unfiltered projections as well as the filtered ones

  • try randomly adding vertical/horizontal/axial flips and rotations to the dataset on each epoch, to avoid bias without having to use enormous datasets (see the augmentation sketch after this list)

  • do PROPER normalisation - get the mean and stdev values of the image and use those to normalise the image before training! - is this okay for microscopy images where intensity needs to be preserved?

  • adapt the adversary/discriminator to use PatchGAN

  • try reducing mitochondrial "movement" in z for the mitochondria simulations

  • pass some of my data through it and see what happens
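
A minimal sketch of the on-the-fly augmentation idea, assuming volumes are (D, H, W) tensors; the exact set of transforms (and whether 90° xy-rotations are appropriate) is still to be decided:

```python
import torch

def random_flip_rotate(volume: torch.Tensor) -> torch.Tensor:
    """Randomly flip along each axis and rotate by a multiple of 90 degrees
    in the xy-plane, so each epoch sees a differently oriented copy."""
    for axis in range(3):                     # 0 = z, 1 = y, 2 = x
        if torch.rand(()) < 0.5:
            volume = torch.flip(volume, dims=[axis])
    k = int(torch.randint(0, 4, ()))          # number of quarter turns
    return torch.rot90(volume, k, dims=[1, 2])
```

Applying this in the dataset's __getitem__ means each epoch sees freshly re-oriented copies without enlarging the dataset on disk.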

2: getting some appropriate data

  • simulate microtubules (undersampled z, with noise)
  • simulate mitochondria (or any other 2D structure) (undersampled z, with noise)
  • get my own 3D experimental data
  • Jonas Ries data
  • Double-Helix data
  • Biplane data
  • OpenCell data (Chan Zuckerberg database)

3: saving and retrieving models

  • figure out how to do this (simple syntax - see the checkpoint sketch after this list)
  • figure out a training pattern
    1. train on simulated data, save model
    2. train on experimental data, save model
    3. test on simulated and experimental data
  • try it!
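
For reference, the simple syntax is to checkpoint the state_dicts; the filename and the choice to bundle both networks with the epoch are illustrative only:

```python
import torch

def save_checkpoint(generator, discriminator, epoch, path="affirm3d_checkpoint.pt"):
    """Save both networks' weights (plus the epoch) in a single file."""
    torch.save({"generator": generator.state_dict(),
                "discriminator": discriminator.state_dict(),
                "epoch": epoch}, path)

def load_checkpoint(generator, discriminator, path="affirm3d_checkpoint.pt"):
    """Rebuild the model objects first, then load the saved weights into them."""
    checkpoint = torch.load(path)
    generator.load_state_dict(checkpoint["generator"])
    discriminator.load_state_dict(checkpoint["discriminator"])
    return checkpoint["epoch"]
```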

Overview

  • training on simulated data seems like the best idea
  • vary the size of the z-PSF across the simulated data
  • but testing/verifying on real data is essential, of course
  • remember to have a test loop
  • IMPORTANT: remember to turn off gradient calculations when you do it (see the test-loop sketch below)!
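
A minimal test-loop sketch showing the gradient switch-off (eval mode plus torch.no_grad()); the generator, loader and loss-function names are placeholders:

```python
import torch

def test_loop(generator, test_loader, loss_fn):
    """Evaluate the generator without tracking gradients."""
    generator.eval()                 # e.g. freezes batch-norm statistics
    total = 0.0
    with torch.no_grad():            # no gradient calculations during testing
        for lowres, target in test_loader:
            total += loss_fn(generator(lowres), target).item()
    return total / len(test_loader)
```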
