Finalize axes & initial transformation #85

Merged
merged 9 commits into ome:main on Feb 2, 2022

Conversation

constantinpape
Contributor

@constantinpape constantinpape commented Jan 27, 2022

Follow up on #57:

  • rename transformations to coordinateTransformations, after discussion in today's meeting with @bogovicj @jbms @lassoan and others
  • remove axisIndices for simplification. Transformations for a subset of axes will be tackled in future work.

Other points from #57 that should still be addressed:

  • order of scale and translation, raised by @will-moore. We actually restrict this to first scale, then translation; I will leave a code comment to point this out.
  • Give mathematical definition of the current transformation, raised by @jbms

Finally, we may want to remove the transformations applied to all resolution levels; I think that this introduces unnecessary complexity without offering much functionality. (Edit: on second thought we may want to keep it, because it will be very useful once we have coordinateTransformations for subsets of axes.)

@joshmoore @sbesson @will-moore @bogovicj please review with current implementation in ome-zarr-py / ome-napari, vizarr and the upcoming advanced transformation proposal in mind. Review from everyone else is of course also welcome, but please keep in mind that we don't want to introduce fundamental changes at this point for v0.4; these discussions can happen for the upcoming advanced transformation proposal that will be spearheaded by @bogovicj.

latest/index.bs Outdated
They MUST contain at most one `scale` transformation that specifies the pixel size in physical units or time duration.
It also MUST contain at most one `translation` that specifies the offset from the origin in physical units.
The length of the `scale` and/or `translation` array MUST be the same as the length of "axes".
If both `scale` and `translation` are given, `translation` MUST be listed after `scale` to ensure that it is given in physical coordinates. If "coordinateTransformations" is not given, the identity transformation is assumed.
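As an illustration of that ordering rule, here is a minimal sketch of how a reader could map data coordinates to physical coordinates (the helper name `apply_transformations` is hypothetical, not part of the spec):

```python
def apply_transformations(transformations, coords):
    """Map data coordinates to physical coordinates.

    `transformations` is the list from "coordinateTransformations";
    per the spec, `scale` (if present) precedes `translation`, so
    applying the list in order yields physical coordinates.
    """
    out = list(coords)
    for t in transformations or [{"type": "identity"}]:
        if t["type"] == "scale":
            out = [c * s for c, s in zip(out, t["scale"])]
        elif t["type"] == "translation":
            out = [c + o for c, o in zip(out, t["translation"])]
        # "identity" leaves the coordinates unchanged
    return out

# pixel (2, 3) with 0.5 um pixels, offset by 1.0 um per axis:
transforms = [
    {"type": "scale", "scale": [0.5, 0.5]},
    {"type": "translation", "translation": [1.0, 1.0]},
]
print(apply_transformations(transforms, [2, 3]))  # [2.0, 2.5]
```

If the order were reversed, the translation would be scaled as well, which is exactly the ambiguity the restriction avoids.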
Contributor Author

@will-moore we actually restrict the order of scale and translation here to avoid the ambiguity discussed earlier. I suggest we leave this as is for v0.4. (cc @bogovicj, for now we probably don't need a longer explanation on transformation order; but I am sure this will become relevant for the advanced transformation proposal ;))

Contributor

Sounds good.
As you say, I'll still write the explanation on ordering that I was planning on, but indeed, it's not needed here for now.
Thanks @constantinpape !

latest/index.bs Outdated
}
],
"transformations": [{"type": "scale", "scale": [0.1], "axisIndices": [0]}], # the time unit (0.1 milliseconds), which is the same for each scale level
"transformations": [{"type": "scale", "scale": [0.1, 1.0, 1.0, 1.0, 1.0]}], # the time unit (0.1 milliseconds), which is the same for each scale level
Contributor Author

If we decide to drop the 'global' transformation we need to remove it here as well and move the 0.1 into the per scale transforms.

@@ -263,16 +261,17 @@ Each dictionary in "datasets" MUST contain the field "path", whose value contain
to the current zarr group. The "path"s MUST be ordered from largest (i.e. highest resolution) to smallest.

Each "datasets" dictionary MUST have the same number of dimensions and MUST NOT have more than 5 dimensions. The number of dimensions and order MUST correspond to number and order of "axes".
Each dictionary MAY contain the field "transformations", which contains a list of transformations that map the data coordinates to the physical coordinates (as specified by "axes") for this resolution level.
The transformations are defined according to [[#trafo-md]]. In addition, the transformation types MUST only be `identity`, `translation` or `scale`.
They MUST contain at most one `scale` transformation per axis that specifies the pixel size in physical units.
Contributor Author

The "per axis" should be kept if we keep the 'global' transformation.

@will-moore
Copy link
Member

While we can still rename things, can we consider renaming translation to translate (as I suggested on #57) since it is more consistent with scale (both verbs) and it matches what napari uses.
If translation is a convention elsewhere then that's OK, but it's always easier to avoid divergence if possible.

latest/index.bs Outdated
The requirements (only `scale` and `translation`, restrictions on order) are in place to provide a simple mapping from data coordinates to physical coordinates while being compatible with the general transformation spec.

Each "multiscales" dictionary MAY contain the field "coordinateTransformations", describing transformations that are applied to each resolution level.
The transformations MUST follow the same rules about allowed types, order, etc. as in "datasets:coordinateTransformations".
Contributor Author

Now that we have resolved the "datasets:coordinateTransformations", let's return to whether we want to keep the "multiscales:coordinateTransformations", i.e. a transformation that is applied the same way to all scale levels, after the individual transforms per scale level. For more context: the motivation for this feature is to factor out a transformation that is applied to all scale levels, e.g. scaling the time interval to be 0.1 seconds:

"axes": [{"name": "t", "type": "time", "unit": "seconds"}, {"name": "y", "type": "space", "unit": "meter"}, {"name": "x", "type": "space", "unit": "meter"}],
# version with transformation only in datasets:
"datasets": [
  {"coordinateTransformations": [{"type": "scale", "scale": [0.1, 0.2, 0.2]}]},  # scale-level 0, physical size is 20 cm
  {"coordinateTransformations": [{"type": "scale", "scale": [0.1, 0.4, 0.4]}]}   # scale-level 1, physical size is 40 cm, time scale is the same
]
# version with additional transformation in multiscales 
"datasets": [
  {"coordinateTransformations": [{"type": "scale", "scale": [1.0, 0.2, 0.2]}]},  # scale-level 0, physical size is 20 cm
  {"coordinateTransformations": [{"type": "scale", "scale": [1.0, 0.4, 0.4]}]}   # scale-level 1, physical size is 40 cm
]
"coordinateTransformations": [{"type": "scale", "scale": [0.1, 1.0, 1.0]}]  # apply the timescale for both resolutions.

For our current transformations it is trivial to express the scale (or translation) in "multiscales:coordinateTransformations" without using "datasets:coordinateTransformations". However, for advanced transformations this is different: take the example of a non-uniform time axis. We could express this with a transformation that has a value for each discrete point along one axis, e.g. 100 values if we have 100 time points. In this case it would be much better to specify this once in "multiscales" and not several times in "datasets".
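To illustrate why the two forms above are interchangeable for the current transforms: two `scale` transforms compose elementwise, so the multiscales-level scale can be folded into each dataset's scale. A hypothetical sketch (the helper name is made up):

```python
def flatten_global_scale(global_scale, dataset_scales):
    """Fold a multiscales-level scale transform into each dataset's
    scale transform; two scale transforms compose elementwise."""
    return [
        [g * s for g, s in zip(global_scale, ds)]
        for ds in dataset_scales
    ]

# the example above: time scale 0.1 applied on top of per-level spatial scales
per_level = flatten_global_scale(
    [0.1, 1.0, 1.0],
    [[1.0, 0.2, 0.2], [1.0, 0.4, 0.4]],
)
print(per_level)  # [[0.1, 0.2, 0.2], [0.1, 0.4, 0.4]]
```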

Given that we don't have these use cases yet, I would vote to remove "multiscales:coordinateTransformations" from the current version to keep it simple. We can then introduce it (or a similar approach) once it becomes necessary in the next version. But I don't have a very strong opinion on this and would be OK with keeping it if a majority finds it useful already.

cc @bogovicj @will-moore (and comments are welcome from everyone of course)

Member

My understanding is that so far any transformations (including advanced transformations) can always be expressed using only dataset:coordinateTransformations and defining a transformation property at a higher level is primarily for optimization/efficiency purposes.

Within the scope of 0.4, my personal feeling is that the second form proposed above does not offer significant advantages while increasing the complexity of the metadata. For me, that's an incentive to leave it out of the specification for now and introduce it as part of the next body of work when there are clear use cases.

Also, I assume you are talking about introducing the coordinateTransformations at the individual multiscale level, i.e. within an element of the multiscales array. One could also imagine defining it at the multiscales level, i.e. a transformation that applies to all the multiscales lists (also true for axes).

Contributor

Either in 0.4 or at a later time we will want the global coordinate transformation for the use case of registration. With coordinate transformations that are the output of registration methods, they typically apply to all scales in the same way. While technically possible to duplicate these over the scales, when we support displacement field transformations this will result in unnecessarily large duplicated storage unless the path feature is used.

Member

👍 thanks for the input @thewtex. Re-reading myself, I think my comment was hyperfocusing on the use case described in #85 (comment) and I still think a top-level scale element brings little value, primarily because scale is proposed to be mandatory at every dataset level.

The registration use case you are bringing up is a good one. Would it be useful to construct such an example where each dataset contains a scale and the multiscale object contains a translation? /cc @constantinpape


Neuroglancer internally uses a very similar multiscale volume representation to what is proposed here. It specifies the following:

  • For the top-level multiscale volume, there is a coordinate space that indicates:
    • Name of each dimension
    • Unit and scale for each dimension, e.g. "4 nm" or "0.5 s" or "" (for unitless dimensions like channel dimensions), that indicates the "native resolution" of the dataset. For example if the unit is "4 nm", then changing the coordinate by 1 corresponds to a shift of 4 nm in physical space.
    • Optional lower and upper bounds for each dimension.
  • Then for each scale there is an affine transform to transform from the voxel space of the scale to the native resolution indicated at the top level. Typically the affine transform for the first scale is translation only, and the affine transform for subsequent levels are scale-and-translation-only, where the scale factors are the downsampling factors relative to the first scale.

While it breaks down once you introduce rotation or non-linear warping, the concept of a "native resolution" is very useful in practice when dealing with only translation and scale, so it may be valuable to preserve that in the ome-zarr spec.

While just supporting both per-multiscale and per-scale transformation may be somewhat useful to allow a more concise representation, I think it could be even more useful to specify the purpose of the separate transformations. For example, if we wanted to follow what Neuroglancer does internally, we could say that the per-scale transformations should transform to the "native" resolution of the dataset, while the top-level transformation should indicate the native resolution of the dataset. (If the top-level transform is not scale-and-translation-only, then we would not be able to infer a native resolution.)

Contributor Author

Thanks for the feedback @thewtex and @jbms. I had similar use-cases as the registration one in mind when proposing the current structure. (But then I considered not introducing it in 0.4 and leaving it for the follow-up proposals to keep things simple, because with the current spec we can only express fairly simple transformations that won't get us all the way for many use-cases.) Anyway, I think given your comments there seems to be some consensus that having the multiscales:coordinateTransformations will be useful, so we might introduce it now already. I will try to get some feedback from @bogovicj on this as well, how this factors in with his initial approach to the extended transformation spec.

@sbesson: yes, we could introduce transformations even a level higher (that apply to each multiscales image), but I would not go there for now and would hope that such use-cases would instead be covered by an upcoming collections spec.

@constantinpape
Contributor Author

constantinpape commented Jan 28, 2022

While we can still rename things, can we consider renaming translation to translate (as I suggested on #57) since it is more consistent with scale (both verbs) and it matches what napari uses.
If translation is a convention elsewhere then that's OK, but it's always easier to avoid divergence if possible.

Sorry for missing this in #57 @will-moore. But I am in favor of using nouns for all the transformations to be compatible with future versions (what's the verb for affine?). Note that scale is also a noun (our use here roughly matches 1. or 3. in https://www.collinsdictionary.com/dictionary/english/scale)

@will-moore
Member

Hi all, I've added a schema for v0.4 and valid/invalid examples at #87.
If anyone wants to check that these match the current expectations of this PR, and suggest any other invalid examples, that would be great.
Thanks!

@thewtex
Contributor

thewtex commented Jan 31, 2022

@constantinpape @will-moore in the documentation and examples, can we add the translation so the resulting image domains at multiple scales overlap? When downsampling, the center of the pixel shifts.

@d-v-b
Contributor

d-v-b commented Jan 31, 2022

When downsampling, the center of the pixel shifts.

To elaborate on this, the effect of downsampling on the location of pixel centers is a degree of freedom in the downsampling routine. The multiscale metadata should not contain assumptions about the details of a downsampling routine, which mandates both scale and translation transforms for each scale level.

@jbms

jbms commented Jan 31, 2022

@d-v-b @thewtex Regarding pixel centers, the "shift" also depends on whether integer pixel coordinates are on the boundary between two pixels or in the center of a pixel. That should also be defined by this specification in order to specify the meaning of the transformations. Note: Neuroglancer defines the coordinate space such that integer pixel coordinates are on the boundary between two pixels.

@thewtex
Contributor

thewtex commented Jan 31, 2022

To elaborate on this, the effect of downsampling on the location of pixel centers is a degree of freedom in the downsampling routine. The multiscale metadata should not contain assumptions about the details of a downsampling routine, which mandates both scale and translation transforms for each scale level.

Yes, we do not have to make mandates on how downsampling is done. In the canonical examples for the most common use case of generating a multi-scale representation, it would be good to have the appropriate translations to help the reader's understanding.

whether integer pixel coordinates are on the boundary between two pixels or in the center of a pixel. That should also be defined by this specification in order to specify the meaning of the transformations.

Yes, we should clarify how a transformation applies to pixels. The center of a pixel is very strongly preferred because it is invariant to the spacing between pixels. It also means, for example, that pixels can be treated like points, #65 , etc. and the transformation is applied in the same way.
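A small sketch of the two conventions under discussion (the helper is hypothetical; it assumes a plain per-axis scale-and-translation mapping):

```python
def pixel_to_physical(index, scale, translation, centers=True):
    """Map an integer pixel index to a physical coordinate.

    With the pixel-center convention, index i maps to the center of
    pixel i; with the corner convention (as Neuroglancer uses
    internally), integer coordinates fall on pixel boundaries, so the
    center of pixel i is at i + 0.5.
    """
    i = index if centers else index + 0.5
    return i * scale + translation

# 2x downsampling with the pixel-center convention: the level-1 pixel 0
# averages level-0 pixels 0 and 1 (centers 0.0 and 1.0), so its center
# sits at physical 0.5, which the translation must express:
print(pixel_to_physical(0, scale=1.0, translation=0.0))  # level 0 -> 0.0
print(pixel_to_physical(0, scale=2.0, translation=0.5))  # level 1 -> 0.5
```

This is exactly the center shift under discussion: without the 0.5 translation, the downsampled levels would be displaced relative to the full-resolution data.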

@lassoan

lassoan commented Jan 31, 2022

Note: Neuroglancer defines the coordinate space such that integer pixel coordinates are on the boundary between two pixels.

In 2D applications, using one of the pixel corners as the origin is probably more common, but in 3D imaging the origin of the voxel coordinate system is typically the voxel center. In VTK, ITK, and all applications based on these toolkits, the origin is in the pixel center. In all 3D image file formats that I know of (nrrd, nifti, metaio, dicom), the origin is in the pixel center, too.

The NGFF standard must specify this (if it hasn't been specified already).

Any software that uses a different coordinate system convention internally can convert to/from the standard coordinate system when reading/writing images.

@constantinpape
Contributor Author

@jbms @d-v-b @lassoan @thewtex
Good that you bring up the issue of pixel centers. Indeed this needs to be defined clearly in the spec, and I don't think it is yet. I will read the relevant parts carefully again later to confirm, and will open a separate issue about this, as I think it can be discussed separately from the changes proposed here. (And mixing up too many discussions in a single PR makes it very hard to stay focused.)

Regarding examples: I agree that a case with translation should be in the examples. But I think we all agree that it should not be mandatory.

@@ -236,9 +237,6 @@ Additional fields for the entry depend on "type" and are defined by the column `
| `translation` | one of: `"translation":List[float]`, `"path":str` | translation vector, stored either as a list of floats (`"translation"`) or as binary data at a location in this container (`path`). The length of vector defines number of dimensions. |
| `scale` | one of: `"scale":List[float]`, `"path":str` | scale vector, stored either as a list of floats (`scale`) or as binary data at a location in this container (`path`). The length of vector defines number of dimensions. |
Contributor Author

It looks like this table still isn't rendered correctly:
[screenshot: broken table rendering]
I don't understand why. Could someone with more bikeshed experience give this a shot? cc @joshmoore @will-moore @sbesson

Member

I think I tried this way back on the previous PR, but didn't get much joy. I think @joshmoore suggested using an HTML table?

@bogovicj
Contributor

bogovicj commented Feb 1, 2022

I am in favor of keeping the "multiscales:coordinateTransformations" for the reasons that @thewtex described.

I agree with @thewtex and @lassoan re pixel centers, and so would be happy to say that explicitly, without additional comment, for v0.4. Having more details / examples will be important, but let's not delay this version for that. Rather, we should have that discussion in another issue / new PR that could potentially become v0.4.1.

Thanks @constantinpape !

@constantinpape
Contributor Author

Thanks for the comment @bogovicj. Given this feedback, let's keep the "multiscales:coordinateTransformations" in.
Now we only need to decide on additional restrictions on scale, cc @sbesson @will-moore:
Do we want to restrict the "multiscales:coordinateTransformations" to not contain scale, to avoid redundancy with the "datasets:coordinateTransformations"?
My vote would be to not add this restriction now, as it would complicate the spec description and disallow the one actual use case we have for this in 0.4 (specifying the time axis for all resolution levels). Let me know what you think.

@sbesson modified the milestone: 0.4 (Feb 1, 2022)
@constantinpape
Contributor Author

This PR is good to go from my side now. @sbesson feel free to merge once you think it's good to go or ping me if you discover any more issues.

@sbesson (Member) left a comment

Thanks @constantinpape, the changes proposed match the discussions that happened as part of the last OME-NGFF meeting. Having re-read it carefully, I do not see any outstanding issue so I think it is a great time to freeze the proposed changes and move towards the 0.4 release.

Implementations and real-world examples can help exercise the new axes and transformations concepts introduced as part of this body of work. All feedback can be captured and reviewed together with the next body of work on transformations that will be led by @bogovicj.

@sbesson merged commit 279b74d into ome:main Feb 2, 2022
github-actions bot added a commit that referenced this pull request Feb 2, 2022
Finalize axes & initial transformation

SHA: 279b74d
Reason: push, by @sbesson

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
@d-v-b
Contributor

d-v-b commented Feb 17, 2022

Following up on this suggestion from @thewtex:

Either in 0.4 or at a later time we will want the global coordinate transformation for the use case of registration. With coordinate transformations that are the output of registration methods, they typically apply to all scales in the same way. While technically possible to duplicate these over the scales, when we support displacement field transformations this will result in unnecessarily large duplicated storage unless the path feature is used.

I would like to discuss this global coordinate transformation, because the example given in the 0.4 spec confused me:

"multiscales" : {
...
"datasets": [
    {
        "path": "0",
        "coordinateTransformations": [{"type": "scale", "scale": [1.0, 1.0, 0.5, 0.5, 0.5]}]  # the voxel size for the first scale level (0.5 micrometer)
    },
    {
        "path": "1",
        "coordinateTransformations": [{"type": "scale", "scale": [1.0, 1.0, 1.0, 1.0, 1.0]}]  # the voxel size for the second scale level (downscaled by a factor of 2 -> 1 micrometer)
    },
    {
        "path": "2",
        "coordinateTransformations": [{"type": "scale", "scale": [1.0, 1.0, 2.0, 2.0, 2.0]}]  # the voxel size for the third scale level (downscaled by a factor of 4 -> 2 micrometer)
    }
],
"coordinateTransformations": [{"type": "scale", "scale": [0.1, 1.0, 1.0, 1.0, 1.0]}],  # the time unit (0.1 milliseconds), which is the same for each scale level
...
}

In this example, we have two different instances of the same transform applied to each scale level -- the global transform, and the scale-level-specific transform. Which one is correct? Should they be composed? In what order (for two scaling transforms the order doesn't matter...in which case why do we see two scaling transforms in this example?)

I suggest requiring that, for each array in multiscales["datasets"], array["coordinateTransformation"] is exactly the same as the coordinateTransformation attribute found in array.path/.zarray. I.e., multiscales["datasets"][0]["coordinateTransformation"] is complete on its own, without any extra information provided by other metadata fields in multiscales. This ensures that the semantic scope of multiscales["datasets"] is constrained to "transparently refer to a dataset + its metadata", and it also minimizes the number of different and possibly confusing coordinateTransformation objects lying around. Semantically, we can consider this requirement derived from the notion that coordinateTransformations are only properties of arrays, not groups of arrays.

@thewtex raised the concern that all scale levels might share some large coordinateTransformation (e.g., a displacement field), and we could use multiscales["coordinateTransformation"] to avoid redundant copies of the same large piece of metadata. I wonder how valid this concern is -- if a coordinateTransformation is so large that it's prohibitive to have 5 copies in metadata, then it's probably too large to have even once, and the correct approach is to store a reference to this transformation in all cases.

Curious to hear people's thoughts here. Personally I find this example so confusing that I think it needs to be clarified / amended.

cc @bogovicj

@jbms

jbms commented Feb 17, 2022

In this example, we have two different instances of the same transform applied to each scale level -- the global transform, and the scale-level-specific transform. Which one is correct? Should they be composed? In what order (for two scaling transforms the order doesn't matter...in which case why do we see two scaling transforms in this example?)

To me it seems natural that we have the following coordinate spaces:

  • world (e.g. what is displayed in a viewer)
  • multiscale
  • dataset

The coordinateTransformations for each dataset indicate a transformation from the dataset coordinate space to the multiscale coordinate space. The coordinateTransformations for the multiscale indicate a transformation from the multiscale coordinate space to the world coordinate space.

In cases where all scales other than the first (base level) are just computed from the base level via downsampling, it may be desirable to use the dataset -> multiscale transformations solely to indicate the downsampling factors and any offsets, and then rely on the multiscale -> world transformation to specify the actual voxel size. That also means that if you later adjust the voxel size (e.g. because you previously estimated it incorrectly) there is just one place to change it, rather than having to change it for every scale as well.
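A sketch of that factoring, under the stated assumption that per-dataset scales hold only downsampling factors while the multiscales-level scale holds the physical voxel size (the helper name is made up):

```python
# Hypothetical factoring as described above: correcting a mis-estimated
# voxel size then touches only the multiscales-level scale.
def effective_scale(voxel_size, downsampling_factor):
    """Physical size of one pixel at a given level (per-axis lists)."""
    return [v * f for v, f in zip(voxel_size, downsampling_factor)]

voxel_size = [0.5, 0.5, 0.5]                  # multiscales-level scale (um)
factors = [[1, 1, 1], [2, 2, 2], [4, 4, 4]]   # per-dataset scales
print([effective_scale(voxel_size, f) for f in factors])
# [[0.5, 0.5, 0.5], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]
```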

You do raise another issue, though. If we allow coordinateTransformations to be specified as a property of a single array : when referencing such an array as a dataset in the multiscale metadata, are you referring to the base (untransformed) array, or to the transformed array. It seems to me that it would be desirable to be able to specify either. Alternatively (my preference) we could avoid that problem by disallowing coordinate transformations on a plain array, and instead require them to be specified as a separate "view". I would say that if a dataset refers to something that already has a coordinate transformation applied, the coordinate transformation specified for the dataset in the multiscale metadata should compose on top of any coordinate transformations specified by the referenced "dataset".

Note: it is not clear to me if the current spec allows coordinate transformations to be specified for a plain array --- I was under the impression that coordinate transformations could only be specified as part of the multiscale metadata.

There is also the question of whether a dataset in one multiscale group could actually itself be a multiscale group (e.g. to "import" all of the scales from another multiscale).

@d-v-b
Contributor

d-v-b commented Feb 17, 2022

Note: it is not clear to me if the current spec allows coordinate transformations to be specified for a plain array --- I was under the impression that coordinate transformations could only be specified as part of the multiscale metadata.

Coordinate transformations are defined for a single array (and I think this is their intended use) -- I believe this is covered here in the spec, although there is no example, and the introductory language there is a little confusing. I think instead of .zattrs it should refer to .zarray, to make it clear that the subsequent fields are found in array metadata.

In cases where all scales other than the first (base level) are just computed from the base level via downsampling, it may be desirable to use the dataset -> multiscale transformations solely to indicate the downsampling factors and any offsets, and then rely on the multiscale -> world transformation to specify the actual voxel size.

I think if each dataset in multiscales expresses its dataset -> world transformation, then there is no reason for consuming applications to ever formally represent the downsampling factors. The whole point of representing downsampling factors is to describe the scale and offset of the Nth scale level. If the Nth scale level directly communicates scale and offset, downsampling factors are not needed.

That also means that if you later adjust the voxel size (e.g. because you previously estimated it incorrectly) there is just one place to change it, rather than having to change it for every scale as well.

I agree that this is somewhat convenient, but I think the value of being explicit outweighs the burden in this case.

@d-v-b
Contributor

d-v-b commented Feb 17, 2022

(and no discussion of multiscale metadata is complete without a reference to the classic issue: zarr-developers/zarr-specs#50)

@constantinpape
Contributor Author

Thanks for raising this concern @d-v-b, a couple of quick comments from my side.

In this example, we have two different instances of the same transform applied to each scale level -- the global transform, and the scale-level-specific transform. Which one is correct? Should they be composed?

They should be composed: the transformation in "datasets" is applied first, then the "global" one in "multiscales".
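A sketch of that composition order (hypothetical helpers; only the `scale` and `translation` types allowed in v0.4 are handled):

```python
def apply_list(transformations, coords):
    """Apply a "coordinateTransformations" list to coordinates, in order."""
    out = list(coords)
    for t in transformations:
        if t["type"] == "scale":
            out = [c * s for c, s in zip(out, t["scale"])]
        elif t["type"] == "translation":
            out = [c + o for c, o in zip(out, t["translation"])]
    return out

def to_physical(coords, dataset_tfs, multiscale_tfs):
    # per-dataset transforms first, then the "global" multiscales ones
    return apply_list(multiscale_tfs, apply_list(dataset_tfs, coords))

# using the 0.4 spec snippet quoted earlier (t, c, z, y, x), scale level 0:
phys = to_physical(
    [10, 0, 4, 4, 4],
    [{"type": "scale", "scale": [1.0, 1.0, 0.5, 0.5, 0.5]}],
    [{"type": "scale", "scale": [0.1, 1.0, 1.0, 1.0, 1.0]}],
)
print(phys)  # [1.0, 0.0, 2.0, 2.0, 2.0]
```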

The comment by @jbms summarizes the motivation for having this pretty well, with the caveat that we don't have world in the spec yet, and that there might be multiple world transformations, e.g. for different registrations. (Related motivation from @thewtex: to avoid redundant transformations.)

To me it seems natural that we have the following coordinate spaces:

* world (e.g. what is displayed in a viewer)

* multiscale

* dataset

Personally I find this example so confusing that I think it needs to be clarified / amended

The idea is to give the common timepoint scaling in "multiscales". I agree that this example is artificial; it will make more sense once we have more advanced transformations, e.g. for specifying non-uniform timepoints.

Coordinate transformations are defined for a single array (and I think this is their intended use) -- I believe this is covered here in the spec, although there is no example, and the introductory language here is a little confusing, i think instead of .zattrs it should refer to .zarray to make it clear that the subsequent fields are found in array metadata.

I think you are mixing up a couple of things here. First of all, in the zarr spec .zarray only ever contains the metadata relevant for zarr, e.g. the chunk shape and compression. Other metadata is always stored in .zattrs (there is one exception to this made by netcdf, but I think that is very unfortunate). I think what you refer to is whether metadata is stored on the group level vs. on the array level:

image-group/
  .zgroup  <- this contains the zarr group metadata (just version atm)
  .zattrs <- this contains the group level metadata, which currently contains the ngff metadata
  scale-level0/  <- the array data for scale 0
    .zarray <- contains the zarr array metadata
    .zattrs <- contains additional metadata for the array, currently not used for ngff metadata
...

Ngff is currently not using scale-level0/.zattrs to store any metadata; this means that no transformations are currently stored at the per-array level.

Furthermore, coordinateTransformations are not defined specifically for a single array. They can describe the transformation of the array data space into a physical space (and that is their current use case), but will also be useful for describing other transformations, e.g. registering an intermediate physical space to a world coordinate system. This is currently being fleshed out in #94.

I think it needs to be clarified / amended

There are multiple options to go forward:

  • clarification: this is always welcome; please open a specific issue briefly pointing out what exactly should be clarified, or make a PR proposing clarifications in https://github.com/ome/ngff/blob/main/latest/index.bs
  • amendment: 0.4 is published, and unless there is an internal inconsistency we will not change it beyond clarifications. It has been complicated enough to bring many viewpoints together to get this release out, so we will not go back on it. For technical changes there are two options:
    • propose changes for an intermediate release (0.4.1)
    • engage in the discussion around the extended transformation functionality in Proposing spaces and transforms #94 and Transformation types #101 for 0.5.0 (this is the option I would strongly encourage, since I am doubtful we would want to change the reference implementations for a 0.4.1 if significant transformation changes are upcoming in 0.5.0)

Finally, I would suggest moving the discussion from this closed PR to separate issues / PRs for specific problems, or to the future transformation discussions for general comments on transformations.

@d-v-b
Contributor

d-v-b commented Feb 18, 2022

I think you are mixing up a couple of things here. First of all in the zarr spec .zarray only ever contains the metadata relevant for zarr, e.g. the chunk shape and compression. Other metadata is always stored in .zattrs (there is one exception from this done by netcdf, but I think that this is very unfortunate.) I think what you refer to is whether metadata is stored on the group level vs. on the array level:

Quite right! This lack of basic zarr knowledge reflects just how much time I have spent working with n5s :) And I will move this discussion to a separate issue.
