[Feature]: add facemap conversion #188
Comments
That would be greatly appreciated! From my basic understanding, it's a mix of ... Correct me if I'm wrong; I only looked quickly through ...
@tuanpham96 What does "component" mean in this context? I was expecting num_frames x num_nodes.
From my understanding, it's the components (default = 500) from using the SVD option on the face region of the video. E.g., component 1 could represent some pixel covariation related to whisking motion, component 2 could represent some other pixel covariation related to nose motion, etc., but the nose and whiskers are not explicitly marked as in pose estimation. I think there's an option for pose estimation and tracking as well, but we haven't used that (yet) in the lab. One of the output arrays is a time series for each of these components; see, for example, cell 18 of their tutorial notebook. Another is an image mask representing each component; see cell 17 of their tutorial notebook.
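To make those shapes concrete, here is a minimal sketch using synthetic data. The key names ("motSVD", "motMask") and array shapes are assumptions based on the description above, not a guaranteed Facemap file format:

```python
import numpy as np

# Synthetic stand-in for Facemap's SVD output dictionary; the key names
# and shapes here are illustrative assumptions, not a confirmed format.
num_frames, num_components = 1000, 500
height, width = 240, 320

proc = {
    # one temporal trace per component: frames x components
    "motSVD": np.random.randn(num_frames, num_components),
    # one spatial mask per component: height x width x components
    "motMask": np.random.randn(height, width, num_components),
}

# Each column of the time-series array is the temporal trace of one
# pixel-covariation component (e.g. whisking- or nose-related motion);
# the matching slice of the mask array shows which pixels it weights.
component_trace = proc["motSVD"][:, 0]     # shape: (num_frames,)
component_mask = proc["motMask"][:, :, 0]  # shape: (height, width)
```

Note that this output has no notion of named body parts, which is why it does not fit the num_frames x num_nodes layout expected for pose estimation.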
@tuanpham96 I see. I was thinking more about capturing the pose estimation, e.g. this image from the readme: [image]. I see that this software also does SVD of the face, so we should think about how best to support both.
@Atika-Syeda or @carsen-stringer, would one of you mind sharing small example output file(s)? Ideally <10MB, as that would help us integrate this into our testing suite. I see a reference to test_cam1.avi and test_cam2.avi here, but I can't find those files.
What would you like to see added to NeuroConv?
Facemap is pose estimation software similar to DLC and SLEAP, but specific to the face of a mouse. It might be possible to use ndx-pose for this, and I think it would be useful to support it in NeuroConv. I believe we could use the output of their test suite as example data.
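For the pose-estimation side, the data layout ndx-pose expects can be sketched as below. The node names, frame count, and shapes are illustrative assumptions (not Facemap's actual output); real code would wrap each node's arrays in an ndx-pose PoseEstimationSeries grouped under a PoseEstimation container, rather than the bare dict used here:

```python
import numpy as np

# Hypothetical facial-keypoint output; node names and shapes are
# illustrative assumptions for this sketch.
nodes = ["nose", "whisker_base", "eye_left"]
num_frames = 1000

# Per-node (x, y) coordinates over time: frames x nodes x 2
keypoints = np.random.rand(num_frames, len(nodes), 2)
# Per-node detection confidence: frames x nodes
confidence = np.random.rand(num_frames, len(nodes))

# One entry per tracked node, mirroring how ndx-pose stores one
# series (data plus confidence) per body part.
per_node = {
    name: {"data": keypoints[:, i, :], "confidence": confidence[:, i]}
    for i, name in enumerate(nodes)
}
```

This per-node layout is the num_frames x num_nodes structure discussed above, and is distinct from the SVD component outputs, so an interface would likely need separate handling for the two.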
Is your feature request related to a problem?
No response
Do you have any interest in helping implement the feature?
No.