feat: introduce zfpc decoder + lazy loading wasm #406
base: master
Conversation
I think the failing test is unrelated to this PR; it seems like a GUI check.
Thanks for your work on this. The changes to split off all of the wasm modules are definitely very useful independent of the codec. At a high level I'm trying to figure out what makes sense as far as including all of these codecs goes. It seems like you have:
With the wasm modules split off, each codec doesn't necessarily add much client-side overhead, but it does still add a maintenance burden to Neuroglancer, since once a codec is added it will have to be supported indefinitely. Therefore I would like to minimize the number of codecs that have to be supported, if possible. In particular, if a codec is still under development and its use is likely limited to a single lab, it might make more sense to support it via a separate Neuroglancer branch / deployment, and then merge it into the upstream Neuroglancer repo once it is mature and more widely used. A possible middle ground might be some sort of equivalent of the Linux kernel staging tree.
Hi Jeremy! Thanks for the feedback. That makes a lot of sense to me. Currently we are using other branches to visualize these data, and that's quite workable. I do like the idea of a staging section, as it makes it somewhat more straightforward to deploy a version of Neuroglancer that has a mixture of the experimental codecs enabled. However, that does mean you'd still get extra work supporting those codecs when PRs come in with fixes. Perhaps I should just maintain separate branches that can be PRed to trunk, plus an integrated branch that supports all of them for daily use, and we can merge individual PRs as codecs become more widely used.
Hi Jeremy,
I've been working on a special packaging for the zfp lossy floating point compression codec. This codec provides substantial and configurable lossy compression (almost an order of magnitude) for the 32-bit floating point, 2-channel vector fields we use for aligning images.
However, the default zfp data stream is not effective when working with "uncorrelated data", such as z-stacks of XY vectors whose X and Y components are uncorrelated. These data are only correlated within a plane and within a vector component. See: https://zfp.readthedocs.io/en/latest/faq.html#q-vfields
Therefore, I created a packaging, `zfpc` (the "c" stands for container), that can contain multiple compression streams. That is, for each chunk, each component plane is compressed separately, leading to `2 * Z` streams that are packaged in Fortran order. The logic is generalized so that any of the X, Y, Z, or W channels can be marked as uncorrelated.

This PR contains the C++ decoder needed to visualize the zfpc stream. It also contains changes to lazily load the wasm modules for png, compresso, and zfpc.
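To make the layout above concrete, here is a minimal TypeScript sketch of what a decode path for such a container might look like, combined with lazy loading of the wasm decoder. Everything here is illustrative: the `./zfpc_wasm` module path, the `decodeZfpStream` export, the header fields, and the exact stream ordering are assumptions made for the sketch, not the actual zfpc format or the code in this PR.

```ts
// Illustrative only: field names, module path, and exports are assumptions,
// not the real zfpc format or the C++/wasm decoder in this PR.
interface ZfpcDecoderModule {
  // Decodes one independent zfp stream into sx * sy floats (one XY plane).
  decodeZfpStream(stream: Uint8Array): Float32Array;
}

interface ZfpcSketchHeader {
  sx: number;
  sy: number;
  sz: number;
  channels: number;        // 2 for an XY vector field
  streamOffsets: number[]; // byte offset of each independent stream
}

// Lazily load the (hypothetical) wasm decoder the first time a zfpc chunk
// is seen, so users who never view zfpc data never fetch the module.
let zfpcWasm: Promise<ZfpcDecoderModule> | undefined;
function getZfpcWasm(): Promise<ZfpcDecoderModule> {
  if (zfpcWasm === undefined) {
    zfpcWasm = import('./zfpc_wasm') as Promise<ZfpcDecoderModule>;
  }
  return zfpcWasm;
}

// Decode a chunk stored as channels * sz independent streams, one per
// (channel, z) plane. The "channel varies fastest, then z" ordering below is
// an assumption standing in for the container's Fortran-order packaging.
async function decodeZfpcChunk(
  buffer: Uint8Array,
  header: ZfpcSketchHeader,
): Promise<Float32Array> {
  const { decodeZfpStream } = await getZfpcWasm();
  const { sx, sy, sz, channels, streamOffsets } = header;
  const out = new Float32Array(sx * sy * sz * channels);
  let i = 0;
  for (let z = 0; z < sz; ++z) {
    for (let c = 0; c < channels; ++c, ++i) {
      const begin = streamOffsets[i];
      const end = i + 1 < streamOffsets.length ? streamOffsets[i + 1] : buffer.length;
      const plane = decodeZfpStream(buffer.subarray(begin, end));
      // Place the decoded XY plane into a channel-major output volume.
      out.set(plane, (c * sz + z) * sx * sy);
    }
  }
  return out;
}
```

The dynamic `import()` is what makes the per-codec wasm modules pay-as-you-go: the decoder is only fetched when a chunk that needs it is actually encountered.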
Thanks for your consideration!
Will