
Finalize axes & initial transformation #85

Merged · 9 commits · Feb 2, 2022
31 changes: 15 additions & 16 deletions latest/index.bs
@@ -222,10 +222,11 @@ keys as specified below for discovering certain types of data, especially images
If part of [[#multiscale-md]], the length of "axes" MUST be equal to the number of dimensions of the arrays that contain the image data.
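For illustration, an "axes" value for a three-dimensional array with one time and two space dimensions might look as follows (a sketch; the names and units are hypothetical):

"axes": [
    {"name": "t", "type": "time", "unit": "second"},
    {"name": "y", "type": "space", "unit": "micrometer"},
    {"name": "x", "type": "space", "unit": "micrometer"}
]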


"transformations" metadata {#trafo-md}
"coordinateTransformations" metadata {#trafo-md}
-------------------------------------

"transformations" describes a series of transformations, e.g. to map discrete data space of an array to the corresponding physical space.
"coordinateTransformations" describe a series of transformations that map between two coordinate spaces (defined by "axes").
For example, to map a discrete data space of an array to the corresponding physical space.
It is a list of dictionaries. Each entry describes a single transformation and MUST contain the field "type".
The value of "type" MUST be one of the elements of the `type` column in the table below.
Additional fields for the entry depend on "type" and are defined by the column `fields`.
@@ -236,9 +237,6 @@
| `translation` | one of: `"translation":List[float]`, `"path":str` | translation vector, stored either as a list of floats (`"translation"`) or as binary data at a location in this container (`"path"`). The length of the vector defines the number of dimensions. |
| `scale` | one of: `"scale":List[float]`, `"path":str` | scale vector, stored either as a list of floats (`"scale"`) or as binary data at a location in this container (`"path"`). The length of the vector defines the number of dimensions. |
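As an illustration, a "coordinateTransformations" list with one entry of each kind from the table might look as follows (a sketch; the values and the binary-data path are hypothetical):

"coordinateTransformations": [
    {"type": "scale", "scale": [0.5, 0.5, 0.5]},        # scale vector stored inline as a list of floats
    {"type": "translation", "path": "translations/0"}   # translation vector stored as binary data in this container
]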
Contributor Author
It looks like this table still isn't rendered correctly:
[screenshot: table-rendering]
I don't understand why. Could someone with more bikeshed experience give this a shot? cc @joshmoore @will-moore @sbesson

Member
I think I tried this way back on the previous PR, but didn't get much joy. I think @joshmoore suggested using an HTML table?


In addition, the field "axisIndices" MAY be given to specify the subset of axes that the transformation is applied to, leaving the other axes unchanged. If it is not given, the transformation is applied to all axes. The length of "axisIndices" MUST be equal to the dimensionality of the transformation. If "axisIndices" is not given, the dimensionality of the transformation MUST be equal to the number of dimensions of the space that the transformation is applied to.
If given, "axisIndices" MUST be listed in increasing order and uses zero-based indexing.
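For example, a `scale` transformation that applies only to the three spatial axes of a five-dimensional tczyx array could be written as follows (a sketch with hypothetical values):

{"type": "scale", "scale": [0.5, 0.5, 0.5], "axisIndices": [2, 3, 4]}  # scales z, y, x and leaves t, c unchanged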

The transformations in the list are applied sequentially and in order.


@@ -263,16 +261,17 @@
Each dictionary in "datasets" MUST contain the field "path", whose value contains the path to the array data, relative to the current zarr group. The "path"s MUST be ordered from largest (i.e. highest resolution) to smallest.

Each "datasets" dictionary MUST have the same number of dimensions and MUST NOT have more than 5 dimensions. The number of dimensions and order MUST correspond to number and order of "axes".
Each dictionary MAY contain the field "transformations", which contains a list of transformations that map the data coordinates to the physical coordinates (as specified by "axes") for this resolution level.
The transformations are defined according to [[#trafo-md]]. In addition, the transformation types MUST only be `identity`, `translation` or `scale`.
They MUST contain at most one `scale` transformation per axis that specifies the pixel size in physical units.
Contributor Author
The "per axis" should be kept if we keep the 'global' transformation.

It also MUST contain at most one `translation` per axis that specifies the offset from the origin in physical units.
If both `scale` and `translation` are given `translation` must be listed after `scale` to ensure that it is given in physical coordinates. If "transformations" is not given, the identity transformation is assumed.
Each dictionary MAY contain the field "coordinateTransformations", which contains a list of transformations that map the data coordinates to the physical coordinates (as specified by "axes") for this resolution level.
The transformations are defined according to [[#trafo-md]]. The transformation types MUST only be `identity`, `translation` or `scale`.
They MUST contain at most one `scale` transformation, which specifies the pixel size in physical units or the time duration.
They also MUST contain at most one `translation` transformation, which specifies the offset from the origin in physical units.
The length of the `scale` and/or `translation` array MUST be the same as the length of "axes".
If both `scale` and `translation` are given, `translation` MUST be listed after `scale` to ensure that it is given in physical coordinates. If "coordinateTransformations" is not given, the identity transformation is assumed.
Contributor Author
@will-moore we actually restrict the order of scale and translation here to avoid the ambiguity discussed earlier. I suggest we leave this as is for v0.4. (cc @bogovicj, for now we probably don't need a longer explanation on transformation order; but I am sure this will become relevant for the advanced transformation proposal ;))

Contributor

Sounds good.
As you say, I'll still write the explanation on ordering that I was planning on, but indeed, for now it isn't needed.
Thanks @constantinpape!

The requirements (only `scale` and `translation`, restrictions on order) are in place to provide a simple mapping from data coordinates to physical coordinates while
being compatible with the general transformation spec.
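To make the resulting mapping concrete: with one `scale` followed by one `translation`, a data coordinate i along axis d maps to the physical coordinate scale[d] * i + translation[d]. A sketch with hypothetical values for a three-dimensional dataset:

"coordinateTransformations": [
    {"type": "scale", "scale": [0.5, 0.5, 0.5]},                 # voxel size, applied first
    {"type": "translation", "translation": [10.0, 10.0, 0.0]}    # offset in physical units, applied second
]
# under this mapping, the voxel at index (4, 2, 0) lands at the physical position (12.0, 11.0, 0.0)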

Each "multiscales" dictionary MAY contain the field "transformations", describing transformations that are applied to each resolution level.
The transformations MUST follow the same rules about allowed types, order, etc. as in "datasets:transformations".
Each "multiscales" dictionary MAY contain the field "coordinateTransformations", describing transformations that are applied to each resolution level.
The transformations MUST follow the same rules about allowed types, order, etc. as in "datasets:coordinateTransformations".
Contributor Author

Now that we have resolved the "datasets:coordinateTransformations", let's return to whether we want to keep the "multiscales:coordinateTransformations", i.e. a transformation that is applied the same way to all scale levels after the individual transforms per scale level. For more context: the motivation for this feature is to refactor a transformation that is applied to all scale levels, e.g. scaling the time interval to be 0.1 seconds:

"axes": [{"name": "t", "type": "time", "unit": "seconds"}, {"name": "y", "type": "space", "unit": "meter"}, {"name": "x", "type": "time", "unit": "meter"}],
# version with transformation only in datasets:
"datasets": [
  {"coordinateTransformations": [{"type": "scale", "scale": [0.1, 0.2, 0.2]}]},  # scale-level 0, phyiscal size is 20 cm
  {"coordinateTransformations": [{"type": "scale", "scale": [0.1, 0.4, 0.4]}]}   # scale-level 1, physical size is 40 cm, time scale is the same
]
# version with additional transformation in multiscales 
"datasets": [
  {"coordinateTransformations": [{"type": "scale", "scale": [1.0, 0.2, 0.2]}]},  # scale-level 0, phyiscal size is 20 cm
  {"coordinateTransformations": [{"type": "scale", "scale": [1.0, 0.4, 0.4]}]}   # scale-level 1, physical size is 40 cm
]
"coordinateTransformations": [{"type": "scale", "scale": [0.1, 1.0, 1.0]}]  # apply the timescale for both resolutions.

For our current transformations it is trivial to express the scale (or translation) in "multiscales:coordinateTransformations" without using "datasets:coordinateTransformations". However, for advanced transformations this is different: take the example of a non-uniform time axis. We could express this with a transformation that has a value for each discrete point along one axis, e.g. 100 values if we have 100 time points. In this case it would be much better to specify this once in "multiscales" and not several times in "datasets".

Given that we don't have these use-cases yet, I would vote to remove "multiscales:coordinateTransformations" from the current version to keep it simple. We can then introduce it (or a similar approach) once it becomes necessary in the next version. But I don't have a very strong opinion on this and would be ok with keeping it if a majority finds it useful already.

cc @bogovicj @will-moore (and comments are welcome from everyone of course)

Member
My understanding is that so far any transformations (including advanced transformations) can always be expressed using only dataset:coordinateTransformations and defining a transformation property at a higher level is primarily for optimization/efficiency purposes.

Within the scope of 0.4, my personal feeling is that the second form proposed above does not offer significant advantages while increasing the complexity of the metadata. For me, that's an incentive to leave it out of the specification for now and introduce it as part of the next body of work when there are clear use cases.

Also, I assume you are talking about introducing coordinateTransformations at the individual multiscale level, i.e. within an element of the multiscales array. One could also imagine defining it at the multiscales level, i.e. a transformation that applies to all the multiscales lists (also true for axes).

Contributor

Either in 0.4 or at a later time we will want the global coordinate transformation for the use case of registration. Coordinate transformations that are the output of registration methods typically apply to all scales in the same way. While it is technically possible to duplicate these over the scales, once we support displacement field transformations this will result in unnecessarily large duplicated storage unless the path feature is used.

Member

👍 thanks for the input @thewtex. Re-reading myself, I think my comment was hyperfocusing on the use case described in #85 (comment) and I still think a top-level scale element brings little value, primarily because scale is proposed to be mandatory at every dataset level.

The registration use case you are bringing up is a good one. Would it be useful to construct such an example where each dataset contains a scale and the multiscale object contains a translation? /cc @constantinpape


Neuroglancer internally uses a very similar multiscale volume representation to what is proposed here. It specifies the following:

  • For the top-level multiscale volume, there is a coordinate space that indicates:
    • Name of each dimension
    • Unit and scale for each dimension, e.g. "4 nm" or "0.5 s" or "" (for unitless dimensions like channel dimensions), that indicates the "native resolution" of the dataset. For example if the unit is "4 nm", then changing the coordinate by 1 corresponds to a shift of 4 nm in physical space.
    • Optional lower and upper bounds for each dimension.
  • Then for each scale there is an affine transform from the voxel space of that scale to the native resolution indicated at the top level. Typically the affine transform for the first scale is translation-only, and the affine transforms for subsequent levels are scale-and-translation-only, where the scale factors are the downsampling factors relative to the first scale.

While it breaks down once you introduce rotation or non-linear warping, the concept of a "native resolution" is very useful in practice when dealing with only translation and scale, so it may be valuable to preserve that in the ome-zarr spec.

While just supporting both per-multiscale and per-scale transformation may be somewhat useful to allow a more concise representation, I think it could be even more useful to specify the purpose of the separate transformations. For example, if we wanted to follow what Neuroglancer does internally, we could say that the per-scale transformations should transform to the "native" resolution of the dataset, while the top-level transformation should indicate the native resolution of the dataset. (If the top-level transform is not scale-and-translation-only, then we would not be able to infer a native resolution.)

Contributor Author

Thanks for the feedback @thewtex and @jbms. I had similar use-cases as the registration one in mind when proposing the current structure. (But then I considered not introducing it in 0.4 and leaving it for the follow-up proposals to keep things simple, because with the current spec we can only express fairly simple transformations that won't get us all the way for many use-cases.) Anyway, I think given your comments there seems to be some consensus that having the multiscales:coordinateTransformations will be useful, so we might introduce it now already. I will try to get some feedback from @bogovicj on this as well, how this factors in with his initial approach to the extended transformation spec.

@sbesson: yes, we could introduce transformations even a level higher (that apply to each multiscales image), but I would not go there for now and would hope that such use-cases would instead be covered by an upcoming collections spec.

These transformations are applied after the per resolution level transformations specified in "datasets". They can for example be used to specify the `scale` for a dimension that is the same for all resolutions.

Each "multiscales" dictionary SHOULD contain the field "name". It SHOULD contain the field "version", which indicates the version of the multiscale metadata of this image (current version is 0.4).
@@ -296,18 +295,18 @@ It SHOULD contain the field "metadata", which contains a dictionary with additio
"datasets": [
{
"path": "0",
"transformations": [{"type": "scale", "scale": [0.5, 0.5, 0.5], "axisIndices": [2, 3, 4]}] # the voxel size for the first scale level (0.5 micrometer)
"transformations": [{"type": "scale", "scale": [1.0, 1.0, 0.5, 0.5, 0.5]}] # the voxel size for the first scale level (0.5 micrometer)
},
{
"path": "1",
"transformations": [{"type": "scale", "scale": [1.0, 1.0, 1.0], "axisIndices": [2, 3, 4]}] # the voxel size for the second scale level (downscaled by a factor of 2 -> 1 micrometer)
"transformations": [{"type": "scale", "scale": [1.0, 1.0, 1.0, 1.0, 1.0]}] # the voxel size for the second scale level (downscaled by a factor of 2 -> 1 micrometer)
},
{
"path": "2",
"transformations": [{"type": "scale", "scale": [2.0, 2.0, 2.0], "axisIndices": [2, 3, 4]}] # the voxel size for the second scale level (downscaled by a factor of 4 -> 2 micrometer)
"transformations": [{"type": "scale", "scale": [1.0, 1.0, 2.0, 2.0, 2.0]}] # the voxel size for the second scale level (downscaled by a factor of 4 -> 2 micrometer)
}
],
"transformations": [{"type": "scale", "scale": [0.1], "axisIndices": [0]], # the time unit (0.1 milliseconds), which is the same for each scale level
"transformations": [{"type": "scale", "scale": [0.1, 1.0, 1.0, 1.0, 1.0]], # the time unit (0.1 milliseconds), which is the same for each scale level
Contributor Author
If we decide to drop the 'global' transformation we need to remove it here as well and move the 0.1 into the per scale transforms.

"type": "gaussian",
"metadata": { # the fields in metadata depend on the downscaling implementation
"method": "skimage.transform.pyramid_gaussian", # here, the paramters passed to the skimage function are given