[New Format]: ThorLab's tiff file #995
Comments
This is the traceback from using TiffImagingInterface. It is raised after roughly an hour, which is far longer than expected for a data interface that should only be reading the file headers.
Hi, thanks for raising the issue. We will look into it.
So for some reason the memmap was not created. Let's work through catalystneuro/roiextractors#352 to get a better error message that should tell us what's going on. Question: did you try the ScanImageExtractor as suggested in the warning?
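For reference, a minimal sketch of what trying that suggestion could look like; the exact class name and arguments are assumptions that may differ between neuroconv versions, and the file path is a placeholder:

```python
# Hypothetical usage sketch; the class name and arguments may differ between
# neuroconv versions, and the file path is a placeholder.
from neuroconv.datainterfaces import ScanImageImagingInterface

interface = ScanImageImagingInterface(file_path="path/to/thorlabs_recording.tif")
print(interface.get_metadata())
```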
I'm encountering the following error when using the suggested ScanImage extractor:

This makes sense because the data I'm working with was not recorded with MBF's microscope, so the metadata format is different.

Another issue is handling the multi-channel recording. Slicing a memory map entails loading the whole slice into memory, so even if we had a memmap of the file, pulling out a single channel would still load the interleaved frames.

Suite2p's approach to multi-plane, multi-channel recordings is to split the data by channel and plane and save them separately, so that each raw binary file has a shape of time by height by width. This structure handles large datasets effectively. After conversion, Suite2p can load data from these binary files using the BinaryFile class, which is a wrapper around a memmap. Adding support for BinaryFile in neuroconv or the PyNWB API could provide a straightforward solution for users dealing with similar issues, especially those working with large datasets like this one.
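For illustration, a minimal sketch of reading one of those per-plane, per-channel binaries with a plain NumPy memmap; the file path, frame count, and image size are placeholders, and the int16 dtype follows Suite2p's usual convention (in practice these values come from the accompanying ops.npy):

```python
import numpy as np

# Placeholder geometry; in practice these values come from the ops.npy that
# Suite2p saves next to the binary.
n_frames, Ly, Lx = 30_000, 512, 512

# Suite2p stores each plane/channel as raw int16 frames of shape (time, height, width).
movie = np.memmap("suite2p/plane0/data.bin", dtype=np.int16, mode="r",
                  shape=(n_frames, Ly, Lx))

# Only the frames that are actually indexed are read from disk.
mean_image = movie[:100].mean(axis=0)
```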
Yeah, we aim to use memmaps as much as we can, but the library we use is failing to memmap your file, as you clearly understood. I think the hack you pointed out is a good solution. It would be easy for us to create a memmap extractor in roiextractors, and then you could use that wrapper to write your data to NWB (a rough sketch follows below). The implementation looks really simple. Some questions to clarify:

A separate discussion is whether we would like to do something like what suite2p does for ThorLabs data. They rely on the haussmeister code to do the conversion to binary, maybe using this code? If we had more resources we could probably look at that code base and build a better extractor directly from it. But meanwhile I think the hack you propose is a good workaround.
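To make that workaround concrete, here is a minimal sketch of wrapping a memmapped single-channel binary in roiextractors' NumpyImagingExtractor; the file name, shape, dtype, and sampling rate are placeholders, and method names may differ between roiextractors versions:

```python
import numpy as np
from roiextractors import NumpyImagingExtractor

# Placeholder single-channel, single-plane binary (e.g. produced by splitting
# the interleaved recording); shape, dtype, and sampling rate are assumptions.
n_frames, height, width = 30_000, 512, 512
video = np.memmap("channel0_plane0.bin", dtype=np.int16, mode="r",
                  shape=(n_frames, height, width))

# The extractor keeps a reference to the memmapped array, so frames are only
# pulled from disk when they are requested.
extractor = NumpyImagingExtractor(timeseries=video, sampling_frequency=30.0)
first_frames = extractor.get_frames(frame_idxs=[0, 1, 2])
```

An extractor built this way could then feed the existing NWB-writing tooling.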
They have renamed it. For sample data, I'll message on Slack.
Thanks for the pointers. Linking this, as there is some progress but it still lacks data:
What format would you like to see added to NeuroConv?
In an attempt to convert raw TIFF data recorded from ThorLabs' mesoscope, TiffImagingInterface raised an out-of-memory error because the TIFF file is not memory-mappable. Moreover, there is no support for multi-channel recordings in this format: multi-channel data are stored in one giant file whose frames alternate between channels in a round-robin fashion.
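To illustrate the round-robin layout, here is a minimal sketch of de-interleaving channels with strided indexing, assuming the frames were already dumped to a flat binary (placeholder file name, shapes, dtype, and channel count); the underlying problem remains that the TIFF itself is not memory-mappable:

```python
import numpy as np

# Placeholder geometry for the interleaved stack: frames alternate
# channel 0, channel 1, channel 0, channel 1, ...
n_channels, n_frames_total, height, width = 2, 60_000, 512, 512

# Assumes the raw frames have already been dumped to a flat binary file.
stack = np.memmap("raw_interleaved.bin", dtype=np.uint16, mode="r",
                  shape=(n_frames_total, height, width))

# Strided basic slicing gives one lazy view per channel without copying...
channel_views = [stack[c::n_channels] for c in range(n_channels)]

# ...but materializing a slice still loads those frames into memory.
channel0_chunk = np.asarray(channel_views[0][:1000])
```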
Does the format have any documentation?
https://suite2p.readthedocs.io/en/latest/inputs.html
Existing APIs for format
The data format is supported by Suite2p, but I am not sure which libraries it uses to read the data.
Do you have any example files you are willing to share?
No response
Do you have any interest in helping implement the feature?
Yes.