Trim tracking data while loading #169
Hi Leo, how do you have these frame numbers stored? It would be fairly easy to crop the detections after loading them. For example, if you had a dictionary […] and the same thing for […].

Assuming time/compute aren't super limited, probably the easiest thing is to just trim the videos and then rerun keypoint inference from that starting point.
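The dictionary names in the comment above were lost in this copy of the thread, but the cropping idea can be sketched as follows. This is a minimal illustration, not the maintainer's exact code: it assumes a `coordinates` dictionary mapping recording names to arrays of shape `(num_frames, num_keypoints, 2)` (plus a matching `confidences` dictionary), and a hypothetical `bounds` dictionary giving each recording's `(start_frame, end_frame)`.

```python
import numpy as np

# Hypothetical per-recording analysis windows: name -> (start_frame, end_frame)
bounds = {"mouse1_session1": (300, 18000)}

# Example keypoint data as a loader might return it:
# coordinates: (frames, keypoints, 2), confidences: (frames, keypoints)
coordinates = {"mouse1_session1": np.random.rand(20000, 8, 2)}
confidences = {"mouse1_session1": np.random.rand(20000, 8)}

# Crop every recording to its analysis window
coordinates = {k: v[slice(*bounds[k])] for k, v in coordinates.items()}
confidences = {k: v[slice(*bounds[k])] for k, v in confidences.items()}
```

The same slicing would apply to any other per-frame dictionary loaded alongside the keypoints.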
Thanks a lot for the quick feedback! Yes, I tried cropping the keypoint locations after loading them and it works perfectly! I see the problem with the calibration and the generation of grid movies. Still, I'm analyzing a relatively big dataset, and cropping the videos would mean almost duplicating the space on our storage solution, since I want to keep the raw videos containing the beginning and the end of the experiment.
I think for calibration you could just use the original (untrimmed) coordinates as input and it would work. For grid movies you could either pad the contents of […].
Thanks a lot, that works. Just for future reference for others who might run into this: to generate grid movies, do you know if I only need to pad […]?

```python
kpms.generate_grid_movies(
    results,
    project_dir,
    model_name,
    coordinates=coordinates,
    keypoints_only=True,
    keypoints_scale=1,
    use_dims=[0, 1],  # controls projection plane
    **config())
```
Nice workaround for calibration! I actually just had some free time this morning so I decided to bite the bullet and add more formal support for trimming. Would you be willing to beta test? Based on your feedback, I'll merge it into the next release. Here's how you can test it (let me know if I should clarify any of the following steps).
It would be useful if you were able to test calibration, but to avoid laboriously re-annotating, you can just grab the slope/intercept params that you derived last time and enter them directly into the config. Also, you have to delete the file […].

**Docs for trimming**

In some datasets, the animal is missing at the beginning and/or end of each video. In these cases, the easiest solution is to trim the videos before running keypoint detection. However, it's also possible to directly trim the inputs to keypoint-MoSeq. Let's assume that you already have a dictionary called […].

You'll also need to generate a dictionary called […].

After this, the pipeline can be run as usual, except for steps that involve reading the original videos, in which case […].
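Since the dictionary names above were lost in this copy of the thread, here is a hedged sketch of how the last step might work (all names are hypothetical, not the API the maintainer added): keep per-recording trim bounds, and map frame indices in the trimmed data back to the original videos so that video-reading steps seek to the right frames.

```python
# Hypothetical per-recording analysis windows, expressed as (start, end)
# frame numbers in the ORIGINAL, untrimmed videos.
trim_bounds = {
    "mouse1_session1": (450, 17850),
    "mouse1_session2": (300, 18120),
}

def to_original_frame(recording, trimmed_index):
    """Map a frame index in the trimmed data back to the original video,
    so steps that read the raw videos can be offset accordingly."""
    start, end = trim_bounds[recording]
    if not 0 <= trimmed_index < end - start:
        raise IndexError(f"frame {trimmed_index} outside trimmed range")
    return start + trimmed_index

# Frame 0 of the trimmed data corresponds to frame 450 of the raw video
original = to_original_frame("mouse1_session1", 0)
```

This avoids duplicating video files on disk: the raw videos stay untouched, and only index arithmetic changes at read time.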
Sorry for the delay, I had a batch of experiments to follow. Thanks a lot for the update! I'll install it, test it, and get back to you soon!
Hi all,
First of all thanks a lot for the great software!
I'm using the software to cluster mouse open-field tracking data. The videos contain, at the beginning and the end, parts that should not be included in the analysis (e.g., placing and removing the mouse from the arena or adjusting light intensity).
For each tracking file, I have the frame numbers for when the analysis should start and end in that file, and I would like to include this information while loading the tracking data so that irrelevant patterns do not influence behavioral clustering.
Looking at the source code, it seems that this is not possible with the default data loaders.
I wanted to ask if there's a different way of doing this (e.g., trimming the data after loading) or if I should look into writing my own data loader that has this functionality.
Thanks a lot!
all the best,
leo