Image labeling at whole-well level #104

Closed
tcompa opened this issue Jul 13, 2022 · 14 comments

tcompa commented Jul 13, 2022

As of #64 (comment), the planned use cases for image labeling are:

  1. Segmentation at level 0, per site (original FOVs), in 2D and 3D
  2. Segmentation of a full well, e.g. the full well in 2D (which may still work at full resolution, i.e. pyramid level 0)
  3. Segmentation at user-selected levels, in 2D or 3D, e.g. segment sites at pyramid level 1 or the whole well at level 2

Use case 1 is discussed in #64.
This issue concerns use case 2, namely the possibility to segment a whole 2D well.


tcompa commented Jul 15, 2022

As of 873e854, we have a first naive implementation, where the mask is explicitly upscaled before being stored on disk (see fractal-analytics-platform/fractal-tasks-core#24 for a discussion). This allowed for a quick implementation and visualization of results.

See for instance this result for a 2x2 well segmented at level 1:

[Screenshot from 2022-07-15 11-36-39]

Next steps:

  1. Include the appropriate coordinateTransformations metadata, so that explicit upscaling is not needed for visualization (it will still be needed for processing, but probably done on-the-fly).
  2. Integrate this task with the other Fractal tasks, so that it can run through Fractal/parsl.

Notes

  • This task currently works on a single FOV, which corresponds to a well because we are always merging FOVs. In a more general case, a bit more logic would be needed to load a whole well by stitching individual FOVs. Not needed at the moment.
  • Upscaling is done naively as in https://stackoverflow.com/a/7525345/19085332. Note: this is only tested on level 1 of a 2x2 well, and it may not scale up to a 9x8 well. To be checked. Anyway, upscaling won't be part of this task (see point 1 above), so this discussion can be postponed to Create core feature measurements #69.
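The naive nearest-neighbor upscaling from that Stack Overflow answer can be sketched as follows (the function name is illustrative, not the actual task code):

```python
import numpy as np

def upscale_mask(mask, factor):
    """Nearest-neighbor upscaling along the Y and X axes, by
    repeating each pixel `factor` times per axis."""
    return np.repeat(np.repeat(mask, factor, axis=-2), factor, axis=-1)

# A (1, 2, 2) label array becomes (1, 4, 4), each label covering a 2x2 block
mask = np.array([[[1, 2], [3, 4]]], dtype=np.uint16)
upscaled = upscale_mask(mask, 2)
```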


jluethi commented Jul 15, 2022

This segmentation result looks great, @tcompa! 👏🏻

In a more general case, a bit more of logic would be needed to load a whole well by stitching individual FOVs. Not needed, at the moment.

Since our current approach always saves wells as single FOVs, we don't need to go into this now. But it's worth considering down the road :)

Very much looking forward to seeing this run on whole wells through Fractal. I will be preparing additional test sets where we want to test 2D segmentation of wells on organoids (much larger structures), which should work by changing some of the cellpose parameters :)


tcompa commented Jul 15, 2022

For future reference: segmenting an array of shape (1, 9720, 10240) (corresponding to level 1 of a 2D 9x8 well) leads to a GPU out-of-memory error within cellpose:

Traceback (most recent call last):
  File "../../image_labeling_whole_well.py", line 210, in <module>
    image_labeling_whole_well(
  File "../../image_labeling_whole_well.py", line 101, in image_labeling_whole_well
    mask, flows, styles, diams = model.eval(
  File "/data/homes/fractal/.conda/envs/fractal/lib/python3.8/site-packages/cellpose/models.py", line 227, in eval
    masks, flows, styles = self.cp.eval(x, 
  File "/data/homes/fractal/.conda/envs/fractal/lib/python3.8/site-packages/cellpose/models.py", line 536, in eval
    masks, styles, dP, cellprob, p = self._run_cp(x, 
  File "/data/homes/fractal/.conda/envs/fractal/lib/python3.8/site-packages/cellpose/models.py", line 636, in _run_cp
    outputs = dynamics.compute_masks(dP[:,i], cellprob[i], niter=niter, cellprob_threshold=cellprob_threshold,
  File "/data/homes/fractal/.conda/envs/fractal/lib/python3.8/site-packages/cellpose/dynamics.py", line 725, in compute_masks
    mask = remove_bad_flow_masks(mask, dP, threshold=flow_threshold, use_gpu=use_gpu, device=device)
  File "/data/homes/fractal/.conda/envs/fractal/lib/python3.8/site-packages/cellpose/dynamics.py", line 580, in remove_bad_flow_masks
    merrors, _ = metrics.flow_error(masks, flows, use_gpu, device)
  File "/data/homes/fractal/.conda/envs/fractal/lib/python3.8/site-packages/cellpose/metrics.py", line 282, in flow_error
    dP_masks = dynamics.masks_to_flows(maski, use_gpu=use_gpu, device=device)
  File "/data/homes/fractal/.conda/envs/fractal/lib/python3.8/site-packages/cellpose/dynamics.py", line 277, in masks_to_flows
    mu, mu_c = masks_to_flows_device(masks, device=device)
  File "/data/homes/fractal/.conda/envs/fractal/lib/python3.8/site-packages/cellpose/dynamics.py", line 152, in masks_to_flows_gpu
    mu = _extend_centers_gpu(neighbors, centers, isneighbor, Ly, Lx, 
  File "/data/homes/fractal/.conda/envs/fractal/lib/python3.8/site-packages/cellpose/dynamics.py", line 83, in _extend_centers_gpu
    grads = T[:, pt[[2,1,4,3],:,0], pt[[2,1,4,3],:,1]]
RuntimeError: CUDA out of memory. Tried to allocate 1.29 GiB (GPU 0; 14.76 GiB total capacity; 12.42 GiB already allocated; 561.44 MiB free; 13.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Now trying with level 2.
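As a sanity check on the array sizes involved, here is a small sketch (assuming 2160x2560 pixels per site and coarsening_xy=2, consistent with the shapes reported in this thread):

```python
# Pixel shape of a 9x8-site well at a given pyramid level, assuming
# 2160x2560 pixels per site and an XY coarsening factor of 2 per level.
sites_y, sites_x = 9, 8
site_shape_y, site_shape_x = 2160, 2560
coarsening_xy = 2

def well_shape(level):
    factor = coarsening_xy ** level
    return (1, sites_y * site_shape_y // factor, sites_x * site_shape_x // factor)

print(well_shape(1))  # (1, 9720, 10240) -> OOM on a ~15 GiB GPU
print(well_shape(2))  # (1, 4860, 5120)
```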


tcompa commented Jul 15, 2022

Labeling a single Z plane of a 9x8 well at level 2 (corresponding to a x4 coarsening along X and Y) does not lead to memory errors:

Start image_labeling_whole_well task for /data/active/fractal/tests/Temporary_data_UZH_1_well_9x8_sites_singlefov/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0/
Total well shape/chunks:
(1, 4860, 5120)
((1,), (2160, 2160, 540), (2560, 2560))

START Cellpose | well: <class 'dask.array.core.Array'>, (1, 4860, 5120) 
END   Cellpose | Elapsed: 19.3520 seconds | mask shape: (1, 4860, 5120), mask dtype: uint16, max(mask): 8380

upscaled mask from (1, 4860, 5120) to (1, 19440, 20480)

da.from_array(upscaled_mask) [with rechunking]: dask.array<rechunk-merge, shape=(1, 19440, 20480), dtype=uint16, chunksize=(1, 2160, 2560), chunktype=numpy.ndarray>

The memory peak for this run is ~7 GB, and the runtime is ~20 seconds.

The labeling options are probably somewhat off, since we seem to find fewer labels than expected:

[Screenshot from 2022-07-15 13-12-05]

tcompa added a commit that referenced this issue Jul 15, 2022

tcompa commented Jul 15, 2022

BUGFIX:

In the previous comment I kept diameter=40.0, while in fact it needs to change when working at higher levels (since it is in pixel units). After setting diameter=(40.0 / coarsening_xy ** labeling_level) (see bae9bd8), segmentation takes ~1 minute, with a peak memory of ~9 GB.

And the labels look much more reasonable:

[Screenshot from 2022-07-15 13-49-26]
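The diameter rescaling from the bugfix above amounts to this one-liner (values taken from this thread; cellpose's `diameter` parameter is expressed in pixels):

```python
# Cellpose's `diameter` is in pixel units, so it must shrink when
# segmenting at a coarser pyramid level.
coarsening_xy = 2       # XY downsampling factor between pyramid levels
labeling_level = 2      # pyramid level at which segmentation runs
diameter_level0 = 40.0  # expected object diameter at full resolution

diameter = diameter_level0 / coarsening_xy ** labeling_level
print(diameter)  # 10.0
```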


jluethi commented Jul 15, 2022

@tcompa This looks great! :)
Yes, adapting the diameter is certainly necessary here. It describes approximately how many pixels an object spans.

diameter=(40.0 / coarsening_xy ** labeling_level)

This is a good proxy given the input of 40. It may even be an interesting setting for users later on: we take the diameter at full resolution and calculate the relevant diameter parameter for the given pyramid level.


tcompa commented Jul 15, 2022

For future reference, when labeling (e.g.) level 2 (and with coarsening_xy=2), the correct .zattrs file for the labels should include

            "datasets": [
                {
                    "path": "0",
                    "coordinateTransformations": [{
                        "type": "scale",
                        "scale": [1.0, 4.0, 4.0]
                    }]
                },
                ...
            ],

I'm still trying to understand what scale arguments should go into the next levels ("path": "1", "path": "2", ..). My doubt is: should scale remain fixed and equal to [1,4,4] for all paths, or should it change as [1,4,4], [1,8,8], [1,16,16], ...?

This is hard to verify ex post, because it seems that napari doesn't load pyramids when showing a single FOV, and I cannot show a well/plate, since labels don't seem to be supported there (*). Thus I'm only viewing level 0.

@jluethi, is this question possibly something obvious to you?

(*) See ongoing work on ome/ome-zarr-py#207 and ome/ome-zarr-py#65.

tcompa added a commit that referenced this issue Jul 15, 2022

tcompa commented Jul 15, 2022

As of 136931e, we removed the upscaling and added the correct coordinateTransformations field for the first level (the one ending up in the labels/label_DAPI/0 folder).

Tests with napari confirm that the scale is set correctly (without any explicit array upscaling).

We still have to set the correct scale for higher levels, although it currently won't change anything in the visualization.


jluethi commented Jul 15, 2022

Yes, the coordinateTransformations field should contain this scale. It should actually be relative to whatever we have in the main Zarr file. We started this discussion when parsing the Yokogawa metadata (see fractal-analytics-platform/fractal-tasks-core#25, which has some actual values with 3x coarsening). The UZH test datasets we are using have [1.0, 0.1625, 0.1625] at level 0, with larger values at higher levels.

We haven't yet included them in the parsing, but that should come soon with metadata parsing, and it is required to use ROIs well (fractal-analytics-platform/fractal-tasks-core#24). We will want to use this per-level scale metadata to calculate physical pixel dimensions for a given absolute spatial position at a given pyramid level.

Conclusions:

  1. Yes, higher layers should have changing scales, i.e. [1,4,4], [1,8,8], [1,16,16]
  2. These scales should depend on the scale in the Zarr array (to be implemented), so should be something like: [1.0, 0.1625, 0.1625] , [1.0, 0.325, 0.325], [1.0, 0.65, 0.65], [1.0, 1.3, 1.3] etc. for 2x coarsening
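The conclusions above can be sketched as a small helper that builds the `datasets` entries for a label image segmented at a given pyramid level (names and defaults are illustrative, not the actual task code):

```python
def build_label_datasets(num_levels, coarsening_xy, labeling_level,
                         pixel_sizes=(1.0, 1.0, 1.0)):
    """Build NGFF `datasets` entries for a label image whose
    highest-resolution array was produced at `labeling_level`."""
    datasets = []
    for level in range(num_levels):
        # Total downsampling factor relative to the full-resolution image
        factor = coarsening_xy ** (labeling_level + level)
        scale = [pixel_sizes[0],            # Z is not coarsened
                 pixel_sizes[1] * factor,   # Y
                 pixel_sizes[2] * factor]   # X
        datasets.append({
            "path": str(level),
            "coordinateTransformations": [{"type": "scale", "scale": scale}],
        })
    return datasets

# With coarsening_xy=2 and a label produced at level 2, the per-level
# scales are [1, 4, 4], [1, 8, 8], [1, 16, 16], ...
datasets = build_label_datasets(num_levels=3, coarsening_xy=2, labeling_level=2)
```

Passing the physical pixel sizes (e.g. (1.0, 0.1625, 0.1625)) instead of the default prefactors of 1.0 would directly yield the physical-unit scales from point 2.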


tcompa commented Jul 15, 2022

Thanks for the explanation/reminder.

This is now fixed with 20371d2, which reproduces this behavior:

  1. Yes, higher layers should have changing scales, i.e. [1,4,4], [1,8,8], [1,16,16]

As for the more general expected behavior (where scale also includes a global prefactor related to physical units), I'll postpone its implementation to when we have a more general discussion about including metadata - which should come up soon together with the ROI discussion.


tcompa commented Jul 15, 2022

At this point the prototype task is ready, and all that is missing is its integration within Fractal.


jluethi commented Jul 15, 2022

Great! Looking forward to testing this in Fractal and applying a few new test sets to it :)

Let's make sure we handle the general scales when we do the ROIs, and then update this task to handle them as well. But first, we can test the prototype task in Fractal on original FOVs and on whole wells in 2D :)

tcompa added a commit that referenced this issue Jul 18, 2022

tcompa commented Jul 18, 2022

Great! Looking forward to testing this in Fractal and applying a few new test sets to it :)

This is now ready for tests: #109

@jluethi jluethi self-assigned this Jul 19, 2022

jluethi commented Jul 20, 2022

With the successful tests in #111, I think we can close this issue.

We can later refactor things a bit to support arbitrary pyramid levels as an input when we add ROI support to the labeling task 🙂

@jluethi jluethi closed this as completed Jul 20, 2022
jacopo-exact added a commit that referenced this issue Nov 16, 2022