Update NDTIFF reader #145
Conversation
Haven't tested, but I think the code looks good
```python
# TODO: try casting the dask array into a zarr array
# using `dask.array.to_zarr()`.
# Currently this call brings the data into memory
```
I believe `as_array` now returns a dask array without bringing the data into memory. I think you can test this and remove the comment.
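One way to test that claim without real microscopy data is to wrap a loader in a delayed dask array and check that nothing is read until `compute()` is called. This is a generic dask sketch (the `load_chunk` stand-in is hypothetical), not iohub's actual reader:

```python
import dask
import dask.array as da
import numpy as np

calls = []

def load_chunk():
    # Stand-in for an on-disk read; records every time data is pulled.
    calls.append(1)
    return np.ones((2, 2))

# Build a lazy dask array from the delayed loader.
lazy = da.from_delayed(dask.delayed(load_chunk)(), shape=(2, 2), dtype=float)
assert calls == []   # constructing the dask array reads nothing
total = lazy.sum().compute()
assert calls == [1]  # data is loaded only on compute()
assert total == 4.0
```

If `as_array` behaves like this, the TODO comment can indeed be removed.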
I think this means that `dask.array.to_zarr()` brings the data into memory when the store is a `MemoryStore`, so this method cannot be consistent with the other `get_zarr()` methods.
Hopefully this won't be a problem anymore after #132 and we refactor this.
Yes, I can acquire such datasets. Where would we store them? Should we update the waveorder collection or create a new one?
Could work either way. If we do create a new one, we could also migrate the existing test datasets over.
@ieivanov do you think we should merge this as-is or wait for new test data?
Opened #149 to track the test dataset issue. Merging.
* inheriting basefov on ccfov
* add deprecation warning to ccfov.scale
* activating github pr workflow
* Update NDTIFF reader (#145): use new ndtiff and stop sorting axes
* add ccfov scales tests
* Release timing requirement for I/O-heavy test (#147)
  * release timing requirement for I/O-heavy test
  * suppress data size check for arrays

Co-authored-by: Ziwen Liu <67518483+ziw-liu@users.noreply.github.com>
Resolves #124.

This bumps `ndtiff` to 2.1.0 so that we can use its automatic axes sorting instead of rolling our own.

Currently CI only tests NDTIFFv2 datasets. @ieivanov Should we add some v3 datasets to the pool?
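For readers unfamiliar with what "rolling our own" axes sorting looked like, here is a minimal numpy sketch of manually transposing a dataset into a canonical axis order, which is the kind of code the `ndtiff` 2.1.0 bump makes unnecessary. `CANONICAL` and `sort_axes` are illustrative names, not iohub's actual implementation:

```python
import numpy as np

# Hypothetical canonical axis order previously enforced by hand.
CANONICAL = ("time", "position", "z", "channel")

def sort_axes(data, axes):
    """Transpose `data` so its axes follow the CANONICAL order."""
    order = sorted(range(len(axes)), key=lambda i: CANONICAL.index(axes[i]))
    return np.transpose(data, order), tuple(axes[i] for i in order)

# Axes arrive in acquisition order, e.g. (z, channel, time, position).
data = np.zeros((3, 2, 5, 4))
sorted_data, sorted_axes = sort_axes(data, ("z", "channel", "time", "position"))
assert sorted_data.shape == (5, 4, 3, 2)
assert sorted_axes == ("time", "position", "z", "channel")
```

With the new `ndtiff`, the reader can rely on the library returning axes in a consistent order and drop this bookkeeping.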