[ROIs] Parsing pixel sizes and FOV ROIs #23
Great summary @tcompa. I also created a corresponding milestone for it, so that we can keep an overview of the related issues. Sounds like a good first implementation. In order for this to generalize afterwards, the function that will vary is the one creating the table in 1. That table is either parsed from the mlf & mrf files as described in 1 (using #25), calculated by Fractal based on some inputs (tbd), or generated by the user (support tbd). If 2-4 are then based on that table, we can either have different tasks for the different input schemes or an input parameter that specifies how the metadata is parsed :)
Your metadata-parsing function produces a global list of FOV ROIs in the plate, which are not divided into wells - right? Is this a change you could implement quickly? Let us know whether you prefer that we take care of it or whether you'd rather work on it.
(it would be best if you can do it in the
Hmm, what kind of table do you get when you parse it? In my test data, I got a table that was double-indexed by well & field of view. Thus, by selecting on the first index column (well_id), you should get the well. See here: #25. If that doesn't work or you don't get a well_id column, let me know and I'll look into it!
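A minimal pandas sketch of that selection, assuming the table from #25 is a DataFrame with a (well_id, field_id) MultiIndex; the column names and values here are illustrative, not the actual schema:

```python
import pandas as pd

# Illustrative stand-in for the table produced by the metadata parsing,
# double-indexed by well and field of view
site_metadata = pd.DataFrame(
    {
        "x_micrometer": [0.0, 416.0, 0.0, 416.0],
        "y_micrometer": [0.0, 0.0, 351.0, 351.0],
    },
    index=pd.MultiIndex.from_product(
        [["B03"], [1, 2, 3, 4]], names=["well_id", "field_id"]
    ),
)

# Selecting on the first index level yields all FOV rows of one well
fovs_b03 = site_metadata.loc["B03"]
print(fovs_b03)
```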
My bad, I take it back. The printout of the table in the terminal is not very clear, and I didn't see the `well_id` column.
One more question: would you mind also exposing the image size in pixels (that is, 2160x2560)?
Good idea. I will look into making sure this is parsed & checked from the metadata file, and I will expose it during the refactor to improve the code quality of the metadata parsing :) From the Yokogawa side, it's always specified globally (per channel). For our current parsing, I'd say we enforce that the different channels have the same x & y dimensions, but we still add it as a measurement per site, in case users want to provide tables where different sites were imaged differently at some point.
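A minimal sketch of such a consistency check, assuming the per-channel image sizes have already been parsed into a dict (names and values illustrative):

```python
def check_image_sizes(sizes_per_channel: dict) -> tuple:
    """Enforce that all channels share the same (size_y, size_x) dimensions."""
    unique_sizes = set(sizes_per_channel.values())
    if len(unique_sizes) != 1:
        raise ValueError(
            f"Channels have different image sizes: {sizes_per_channel}"
        )
    return unique_sizes.pop()

# With the sizes from this thread, all channels agree:
size_y, size_x = check_image_sizes(
    {"DAPI": (2160, 2560), "nanog": (2160, 2560), "Lamin B1": (2160, 2560)}
)
```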
It seems that napari does not recognize the global `coordinateTransformations`:

```json
{
    "multiscales": [
        {
            "axes": [
                {"name": "c", "type": "channel"},
                {"name": "z", "type": "space", "unit": "micrometer"},
                {"name": "y", "type": "space", "unit": "micrometer"},
                {"name": "x", "type": "space", "unit": "micrometer"}
            ],
            "coordinateTransformations": [
                {
                    "scale": [1.0, 0.16249999999999984, 0.16249999999999984],
                    "type": "scale"
                }
            ],
            "datasets": [
                {
                    "coordinateTransformations": [
                        {"scale": [1.0, 1.0, 1.0], "type": "scale"}
                    ],
                    "path": "0"
                },
                {
                    "coordinateTransformations": [
                        {"scale": [1.0, 2.0, 2.0], "type": "scale"}
                    ],
                    "path": "1"
                },
                {
                    "coordinateTransformations": [
                        {"scale": [1.0, 4.0, 4.0], "type": "scale"}
                    ],
                    "path": "2"
                },
                {
                    "coordinateTransformations": [
                        {"scale": [1.0, 8.0, 8.0], "type": "scale"}
                    ],
                    "path": "3"
                },
                {
                    "coordinateTransformations": [
                        {"scale": [1.0, 16.0, 16.0], "type": "scale"}
                    ],
                    "path": "4"
                }
            ],
            "version": "0.3"
        }
    ],
    "omero": {
        "channels": [
            {
                "coefficient": 1,
                "color": "00FFFF",
                "family": "linear",
                "label": "DAPI",
                "window": {"end": 700, "max": 65535, "min": 0, "start": 0}
            },
            {
                "coefficient": 1,
                "color": "FF00FF",
                "family": "linear",
                "label": "nanog",
                "window": {"end": 180, "max": 65535, "min": 0, "start": 0}
            },
            {
                "coefficient": 1,
                "color": "FFFF00",
                "family": "linear",
                "label": "Lamin B1",
                "window": {"end": 1500, "max": 65535, "min": 0, "start": 0}
            }
        ],
        "id": 1,
        "name": "TBD",
        "version": "0.4"
    }
}
```

This is with:

```
$ napari --info
napari: 0.4.16
Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.17
System: Ubuntu 22.04 LTS
Python: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]
Qt: 5.15.2
PyQt5: 5.15.7
NumPy: 1.22.3
SciPy: 1.6.1
Dask: 2022.6.1
VisPy: 0.10.0
OpenGL:
- GL version: 4.6 (Compatibility Profile) Mesa 22.2.0-devel (git-1951065 2022-06-28 jammy-oibaf-ppa)
- MAX_TEXTURE_SIZE: 16384
Screens:
- screen 1: resolution 3840x2160, scale 1.0
Plugins:
- console: 0.0.4
- napari-ome-zarr: 0.5.1
- napari-svg: 0.1.6
- scikit-image: 0.4.16
```

It is trivial to move this scale information into the different datasets, but I'm wondering where the problem is.
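For reference, the "trivial" workaround mentioned here — folding the global scale into each dataset's scale and dropping the multiscales-level transformation, so that readers which only look at per-dataset transformations still pick up the pixel sizes — could look like this minimal sketch, assuming the `.zattrs` layout just shown:

```python
import json

with open(".zattrs") as f:
    zattrs = json.load(f)

multiscale = zattrs["multiscales"][0]
global_scale = multiscale["coordinateTransformations"][0]["scale"]

# Fold the global scale into each per-dataset scale transformation
for dataset in multiscale["datasets"]:
    for transform in dataset["coordinateTransformations"]:
        if transform["type"] == "scale":
            transform["scale"] = [
                g * s for g, s in zip(global_scale, transform["scale"])
            ]

# Drop the multiscales-level transformation that the reader ignores
del multiscale["coordinateTransformations"]

with open(".zattrs", "w") as f:
    json.dump(zattrs, f, indent=4)
```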
Update: my guess is that it's a limitation of the `ome-zarr` reader.
Hmm, do we have a strong reason to use a global `coordinateTransformations` entry?

Work on coordinate transforms in OME-NGFF is planned for the v0.5 release in the fall. I don't know all the details, but I think the system will be generalized a bit. Maybe we stay with the "simple" format of having per-dataset scales only?
I'm not thinking about how much work it is, but about how clear it would be to look at a `.zattrs` file. It's a small detail, not a serious issue.
It is not an OME-NGFF issue, since the specs are very clear about this option. The best option would be to fix the ome-zarr reader (I can open that discussion over there, if we decide so; it seems easy to add this feature in a naive way, but not if they prefer a more complete PR), or we can keep it simple and move on with the other solution (store everything in the datasets). I'm fine with both.
Hmm, I see the point. If it's easy to fix on the ome-zarr front, sure, let's do it. But if not, we always know the pyramid level from the folder index (i.e. 0 is the lowest level). Or are you thinking about always specifying the same global scale and using dataset scalings e.g. for label images generated at level 1?

In this spirit: we can also ensure that our way of reading scales always combines global & per-dataset values (via a lib function). Then we can always switch back and forth, depending on what is implemented downstream.

Lastly, I agree, it's not an OME-NGFF issue. But changes are coming to OME-NGFF regarding coordinate transformations in the fall. That may be the reason why supporting all of the current spec hasn't been a priority.
The folder name does not contain information about the actual resolution: when we segment level-2 images, the mask will sit in a folder named `0`.
Yes, the first plan was to use the multiscales scaling to set the physical size of pixels at level 0, and the scales of single datasets to set the coarsening scale.
Agreed, this is not a standard. It would just be an implementation detail, but we obviously always read/copy all transformations (both at the multiscale and dataset level).
Agreed. At the moment this is just copying zattrs from one (existing) zarr to another, and a lib function didn't seem necessary. But we can factor out this operation if it becomes useful.

All in all, I think the easiest is to combine scale transformations in a single place, so that we can visualize things with the current ome-zarr reader and we don't have to think too much about the order of transformations (even though we don't use translations at the moment).

(Let's also touch on this point in our call today.)
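A sketch of the read-side counterpart of that idea — one lib-style helper that always combines the multiscales-level and per-dataset scales — assuming scale-only transformations as in the example above (this is a hypothetical helper, not code from this repo):

```python
def get_effective_scale(multiscale: dict, level: int) -> list:
    """Combine the global scale (if any) with one dataset's scale,
    so callers never need to care where the values are stored."""

    def first_scale(transformations):
        for t in transformations or []:
            if t["type"] == "scale":
                return t["scale"]
        return None

    global_scale = first_scale(multiscale.get("coordinateTransformations"))
    dataset_scale = first_scale(
        multiscale["datasets"][level]["coordinateTransformations"]
    )
    if global_scale is None:
        return dataset_scale
    return [g * s for g, s in zip(global_scale, dataset_scale)]
```

With the `.zattrs` shown earlier, `get_effective_scale(multiscale, 2)` returns approximately `[1.0, 0.65, 0.65]`, regardless of whether the pixel sizes live globally or per dataset.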
As per our last call, we proceed as in
and we re-evaluate this issue later on, when the related changes in OME-NGFF appear.
…rs file, including coordinateTransformations (ref #112)
…add _inspect_ROI_table (ref #112 #113)
Current status:
Missing points:
After point 1 is in place, we can close this issue. Other comments?
Wow, awesome progress! I'll make sure to update the metadata parsing by the end of the week so we can indeed wrap this up then! This status sounds great. One nitpick:
I would agree for extracting ROIs, that is the point. But for pixel sizes, I think the Zarr file should be the universal ground truth. I don't know whether pixel sizes should remain in the ROI table (they aren't part of a ROI definition, right?). If the ROI is defined in physical units and we adopt some general ROI standard at some point, pixel sizes probably wouldn't be a part of it. What does the current ROI table actually look like?
This is the content of a (current) ROI table:

The places where the pixel sizes are used are:
If that's preferable, we can remove pixel sizes from the ROI table. In that case we will have to retrieve them (for both cases 1 and 2) from the scale attribute of level 0 in a zattrs file.
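For illustration, this is roughly how a task would use those pixel sizes to map a physical-unit ROI onto array indices at some pyramid level; the helper and the column names are hypothetical, not the actual table schema:

```python
def roi_to_indices(roi: dict, pixel_size_yx: float,
                   level: int = 0, coarsening_xy: int = 2) -> dict:
    """Map physical ROI coordinates to (start, end) pixel indices at `level`."""
    # Effective pixel size at the requested pyramid level
    pxl = pixel_size_yx * coarsening_xy ** level
    return {
        "x": (round(roi["x_micrometer"] / pxl),
              round((roi["x_micrometer"] + roi["len_x_micrometer"]) / pxl)),
        "y": (round(roi["y_micrometer"] / pxl),
              round((roi["y_micrometer"] + roi["len_y_micrometer"]) / pxl)),
    }

# A 416x351 micrometer FOV at pixel size 0.1625, viewed at level 1
indices = roi_to_indices(
    {"x_micrometer": 416.0, "y_micrometer": 0.0,
     "len_x_micrometer": 416.0, "len_y_micrometer": 351.0},
    pixel_size_yx=0.1625, level=1,
)
print(indices)  # {'x': (1280, 2560), 'y': (0, 1080)}
```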
Great! Yes, it makes a lot of sense that those two tasks need the pixel-size information, but it would be great to read it from the Zarr file. Then the logic of the tasks works on OME-Zarr files in general and reads the metadata directly from the Zarr files. Plus, this reinforces again that the Zarr file is the ground truth and there is just one place one reads pixel-size data from: the zarr metadata in the .zattrs (which is also where the viewer reads this metadata).
I now updated the metadata parsing. An example table looks like this:

I pushed it to the tasks branch though. Can you move it over to tasks-ROI if you need it there, @tcompa? Also, I introduced a new dependency: instead of using the xml library (which triggered a bandit B314 issue), I now use defusedxml.
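For context: defusedxml mirrors the stdlib ElementTree API, so the swap is essentially an import change. A minimal sketch (the file name is illustrative):

```python
# defusedxml guards against XML bombs and similar attacks, which is
# what bandit's B314 warning about xml.etree parsing flags
from defusedxml import ElementTree

tree = ElementTree.parse("MeasurementData.mlf")
root = tree.getroot()
for record in root:
    print(record.tag, record.attrib)
```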
The simplest is the following:
Sure. FYI, from
and obtained 0888d509b20402a387a6a42a2e5cbb78da078441.
Great, thanks!
@jluethi, I'm confused by the table shown in #23 (which is the same one I obtain with your code). EDIT: it's most probably my fault, I must be doing something wrong somewhere; no need to really go and double-check, sorry.
Oh, maybe I created a fake metadata file for the test case that still has too many Z levels in there. Let me have a quick check!
Ah yes, my bad. I'll update the metadata file to be correct for the subset of data we have in the 2x2 subset.
Ok, should be fixed now @tcompa. The synthetic metadata file is updated. Let me know if it works :)
* Do not store pixel sizes in the FOV-ROI table;
* Always parse pixel sizes from the zattrs file, when needed;
* Add level arg to extract_zyx_pixel_sizes_from_zattrs.
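A minimal sketch of what a helper like extract_zyx_pixel_sizes_from_zattrs with a level argument could look like, under the assumptions of this thread (scale-only transformations stored per dataset, with z/y/x as the last three axes); this is a guess at the shape of the function, not the actual implementation from the commit above:

```python
import json

def extract_zyx_pixel_sizes_from_zattrs(zattrs_path: str, level: int = 0) -> list:
    """Return the [z, y, x] pixel sizes for one pyramid level."""
    with open(zattrs_path) as f:
        zattrs = json.load(f)
    multiscale = zattrs["multiscales"][0]
    transformations = multiscale["datasets"][level]["coordinateTransformations"]
    for t in transformations:
        if t["type"] == "scale":
            # Keep the last three entries (z, y, x), dropping e.g. a
            # channel axis if present
            return t["scale"][-3:]
    raise ValueError(f"No scale transformation found for level {level}")
```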
It does work now, thanks.
After
Let's reopen it if something else comes up.
Starting from discussion in
we are now splitting the work on ROIs into several steps/issues:

Useful resources:

First sketch of a detailed to-do list:

1. In `create_zarr_structure`, during the loop over wells, parse metadata using `parse_yokogawa_metadata`, which returns a table like Parsing metadata from Yokogawa experiments #25.
2. For each `.zattrs` file (corresponding to a well, in our single-FOV scheme), add a global scale with pixel sizes in physical units, and a per-dataset scale with coarsening prefactors (see the sketch below).
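A sketch of the metadata layout described in item 2, matching the example `.zattrs` shown earlier in this thread (axes omitted for brevity; pixel sizes and level count are illustrative):

```python
import json

pixel_sizes_zyx = [1.0, 0.1625, 0.1625]  # physical units (micrometer)
coarsening_xy = 2
num_levels = 5

multiscale = {
    "version": "0.3",
    # Global scale: physical pixel sizes at level 0
    "coordinateTransformations": [
        {"type": "scale", "scale": pixel_sizes_zyx}
    ],
    # Per-dataset scales: pure coarsening prefactors
    "datasets": [
        {
            "path": str(level),
            "coordinateTransformations": [
                {
                    "type": "scale",
                    "scale": [
                        1.0,
                        float(coarsening_xy ** level),
                        float(coarsening_xy ** level),
                    ],
                }
            ],
        }
        for level in range(num_levels)
    ],
}
print(json.dumps({"multiscales": [multiscale]}, indent=2))
```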