
Blogpost idea: how to generate multiscale image arrays #141

Open
GenevieveBuckley opened this issue Jul 27, 2022 · 10 comments

Comments

@GenevieveBuckley
Collaborator

This PR is currently in progress, but could be merged soon (for some loose value of "soon"; I don't have a good idea of when): ome/ome-zarr-py#192

When it is done, I think it might be nice to have a blogpost about how to generate a multiscale image array and save it to disk, etc.

Surprisingly, there doesn't seem to be a single, obvious best way to do this (see the discussion in ome/ome-zarr-py#215). So once a convenience function is available, it would be good to highlight it with a blogpost.
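For context, here's a minimal sketch of the kind of pyramid generation I mean, using plain dask.array.coarsen with a mean filter (the array is placeholder data, and the fixed 2x factors and four levels are arbitrary choices, not anything from ome-zarr-py):

```python
import dask.array as da
import numpy as np

# Placeholder full-resolution image (stand-in for real microscopy data).
image = da.random.random((8192, 8192), chunks=(1024, 1024))

# Build pyramid levels by repeatedly mean-downsampling by a factor of 2.
pyramid = [image]
for _ in range(4):
    pyramid.append(da.coarsen(np.mean, pyramid[-1], {0: 2, 1: 2}, trim_excess=True))

for level, arr in enumerate(pyramid):
    print(level, arr.shape)
```

Each level is still a lazy dask array, so the whole pyramid can be computed and written out chunk by chunk.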

Jacob, feel free to nudge me in a few months about this, if you like. (That may or may not work; I can't say for sure I'll be available to do more about it then, but it's worth a try.)

@TomAugspurger
Member

https://github.com/carbonplan/ndpyramid, from the geospatial context, might be helpful / worth linking to here. Its usage of Dask is largely hidden behind xarray (but see carbonplan/ndpyramid#10 for a more direct Dask integration).
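For anyone reading along, the usage looks roughly like this (a sketch loosely based on the ndpyramid README; the tutorial dataset, coarsening factors, and output path are placeholders, and the exact signature may differ between versions):

```python
import xarray as xr
from ndpyramid import pyramid_coarsen

# An xarray.Dataset with 'lat' and 'lon' dimensions (example tutorial data).
ds = xr.tutorial.open_dataset("air_temperature")

# Build a multiscale pyramid by coarsening along the spatial dimensions.
pyramid = pyramid_coarsen(ds, factors=[4, 2, 1], dims=["lat", "lon"], boundary="trim")

# The result is a tree of datasets, one per level, which can be written to Zarr.
pyramid.to_zarr("air_temperature_pyramid.zarr", consolidated=True)
```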

@GenevieveBuckley
Collaborator Author

Ooh, very cool

I actually hadn't heard about ndpyramid. I'm going to have to try that one out, I might actually end up using it all the time. Thanks Tom!

@GenevieveBuckley
Collaborator Author

@sofroniewn & @jni have you two seen or used ndpyramid? It looks super useful, especially in a napari context

@sofroniewn

sofroniewn commented Jul 28, 2022

no - lol @freeman-lab gotta tell me these things! @joshmoore, have you seen this?

@joshmoore

lol @freeman-lab gotta tell me these things!

😆

@joshmoore, have you seen this?

I must admit, yes. But I must also admit to losing track of it. I think @thewtex has one too, as well as @aeisenbarth's https://github.com/aeisenbarth/ngff-writer (which flowed into ome/ome-zarr-py#192). Big 👍🏽 for doing what we can to work together on faster, better, slicker libraries.

@jakirkham
Member

There was also a lot of discussion in issue pydata/xarray#4118 about how to handle this use case better. Josh likely has a better handle on where things are there than I do.

@jakirkham
Member

This PR is currently in progress, but could be merged soon (for some loose value of "soon"; I don't have a good idea of when): ome/ome-zarr-py#192

FWIW this just got merged! 🥳

@will-moore

Support for Dask writing to OME-NGFF (ome/ome-zarr-py#192) is now released in ome-zarr-py 0.6.0.
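For anyone landing here later, the basic pattern looks roughly like this (a minimal sketch following the ome-zarr-py documentation; the path, array, and axes string are placeholders, and defaults may vary between versions):

```python
import dask.array as da
import zarr
from ome_zarr.io import parse_url
from ome_zarr.writer import write_image

# Placeholder 2D image as a lazy dask array.
image = da.random.random((4096, 4096), chunks=(1024, 1024))

# Create an OME-Zarr store and write the image; write_image also generates
# and saves the downsampled pyramid levels via its default Scaler.
store = parse_url("example.ome.zarr", mode="w").store
root = zarr.group(store=store)
write_image(image=image, group=root, axes="yx")
```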

@chrisroat

chrisroat commented Apr 6, 2023

There may be some code snippets that work well for smaller data, but for large datasets, using your general-purpose cluster to do downsampling might be inefficient. It also adds extra tasks to what may otherwise be a clean analysis workflow.

For a large pipeline that is processing and dumping a lot of data, it can be cleaner and more efficient to split out the downsampling work. The Dask processing cluster can store a dask array in a TensorStore dataset using the neuroglancer precomputed driver (https://google.github.io/tensorstore/driver/neuroglancer_precomputed/index.html). Separate, dedicated resources can then handle the out-of-band downsampling: a task queue feeding an Igneous cluster whose CPU/memory/IO can be tuned efficiently for that one task.
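As a rough illustration of the first half of that (a Dask cluster writing into a neuroglancer precomputed dataset via TensorStore), something like the sketch below. The shapes, path, resolution, and metadata values are placeholders, and the spec fields should be checked against the TensorStore docs linked above:

```python
import dask.array as da
import tensorstore as ts

# Placeholder volume; in a real pipeline this comes out of the analysis graph.
volume = da.random.random((512, 512, 64), chunks=(128, 128, 64)).astype("float32")

# Create a neuroglancer precomputed dataset on local disk.
dataset = ts.open({
    "driver": "neuroglancer_precomputed",
    "kvstore": {"driver": "file", "path": "/tmp/precomputed_volume"},
    "multiscale_metadata": {"type": "image", "data_type": "float32", "num_channels": 1},
    "scale_metadata": {
        "size": [512, 512, 64],
        "chunk_size": [128, 128, 64],
        "resolution": [100, 100, 100],
        "encoding": "raw",
    },
    "create": True,
}).result()

# The precomputed format carries a trailing channel axis; select channel 0
# so the target shape matches the 3D dask array.
target = dataset[ts.d["channel"][0]]

# Store the dask array block by block; plain assignment into a TensorStore
# performs a blocking write, which is what da.store relies on here.
da.store(volume, target, lock=False)
```

Downsampling is then left entirely to the out-of-band Igneous workers.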

Igneous is very well developed and maintained. I use it locally on 10-100 GB datasets regularly, and it always works smoothly, even supporting sharded neuroglancer formats.
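For a flavour of what running it locally looks like, roughly following the Igneous README (the layer path and parallelism here are placeholders):

```python
from taskqueue import LocalTaskQueue
import igneous.task_creation as tc

# Path to an existing neuroglancer precomputed layer (placeholder).
layer_path = "file:///tmp/precomputed_volume"

# Generate downsampling tasks for the base mip level and run them
# on local worker processes.
tq = LocalTaskQueue(parallel=8)
tasks = tc.create_downsampling_tasks(layer_path, mip=0, fill_missing=True)
tq.insert(tasks)
tq.execute()
```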

Shards are different from Dask blocks: sharding allows much more efficient storage because a shard is written as one large file (keeping file counts down and making data transfer more efficient), while the much smaller chunks used for visualization are stored within the shard in a format well suited to HTTP range requests.

@GenevieveBuckley
Collaborator Author

Thanks @chrisroat
Do you have an example of this I can look at? I'm not very familiar with igneous (mostly because I don't usually work with neuro datasets)
