Bad tracking results with StrongSORT on my custom tracking dataset #669
Comments
Could you send the complete Yolov5 video?
Hi Mikel, thanks for reacting. They are single images with a 5 second interval (so no video). Is it useful for you to get these images?
All the information you can give me is relevant 😄
Below I posted three images which follow each other up. As said, there is a 5 second interval between the images. I want to be able to recognize the plastics in the frame and give them an ID, such that if the plastic is present in the next frame it will not be counted again. As said above, the track.py results are way worse than the regular yolov5 detect.py runs. Any reason/idea why and how to solve it?
I see the issue here. The association process relies heavily on IoU, and as the elapsed time between frames is high, the uncertainty in motion space is also high. So it would make sense to relax some of the configuration parameters. I would start lowering: to, for example, 0.2. and: to, for example, 0.15. If you are willing to pass me the video, or the image folder, I could help you out here 😄. If you don't want to post it here you can send it to: yolov5.deepsort.pytorch@gmail.com
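To illustrate the point about IoU: with a 5 second gap, the same object's boxes in consecutive frames may not overlap at all, so an IoU-based association cost gives no signal to match on. A minimal sketch in plain Python (not the tracker's own implementation):

```python
def iou_xyxy(box_a, box_b):
    """Intersection over Union of two boxes given as [x1, y1, x2, y2]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# The same object drifting far between two frames taken 5 s apart:
print(iou_xyxy([100, 100, 160, 160], [400, 120, 460, 180]))  # 0.0 -> nothing for IoU matching to latch onto
```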
Many thanks for helping me. I will post a small batch of the image folder here!:
OK, no IoU overlap between frames I see. Have you tried the hparams above?
Not sure what you mean. Btw, my bad. Try increasing the aforementioned values.
So the interval of my images is 5 seconds, but it could be possible to get an image burst for each of these images every 5 seconds. Instead of 1 image per 5 seconds it would then be 1 burst of images per 5 seconds (so 3-4 frames close to each other). Could that help to identify objects? I'm playing with the aforementioned values; what are your suggestions for value changes?
Changing the values has a positive effect on the detection! It is however still common for the model to change the numbering of the items, making counting via the ids still quite challenging. Any idea on the optimal values and/or methods to decrease the id number changes? I've added a few examples of a few successive images and how the numbering changes. Any ideas/suggestions would help!
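For context on why the id switches hurt counting: the natural counting scheme on top of a tracker is to count each track id only the first time it appears, so every id switch turns one object into two counts. A rough sketch of that scheme, assuming the tracker yields (frame, track_id, class_id) tuples rather than track.py's exact output format:

```python
from collections import defaultdict

def count_unique_objects(tracked_detections):
    """Count each track id once; return per-class totals."""
    seen_ids = set()
    counts = defaultdict(int)
    for frame_idx, track_id, class_id in tracked_detections:
        if track_id not in seen_ids:  # a new identity -> one new object counted
            seen_ids.add(track_id)
            counts[class_id] += 1
    return dict(counts)

# The same plastic item keeps id 7 across frames, so it is counted once;
# if its id switched to, say, 12 in a later frame it would be counted twice.
print(count_unique_objects([(1, 7, 0), (2, 7, 0), (2, 9, 0), (3, 9, 0)]))  # {0: 2}
```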
Do you have ground truth for your data?
I am working on a hyperparameter-search script to get the best possible results on custom datasets.
Yes, I will send a zipfile to your email with both the images and annotations. Nice, is the hyperparameter search script a long-term project or will it be finished in the near future?
Cool, sounds good. Interested to see the results! Could you let me know when it is finished? Or what hyperparameter setup works best?
Won't have GPU time to run the hparam search for you but will let you know when it is in a usable state @jurvanwijk
It is in a usable state now @jurvanwijk:
Thanks, will try it tomorrow!
The last line runs into some errors; the arguments are unrecognized: "val.py: error: unrecognized arguments: --evolve --n-trials 1000". Any suggestions on how to fix it?
Sorry. I just pushed a major refactor. I separated the evolution from the validation.
Notice that your dataset has to have the same format as MOT for this to work and be placed under
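For orientation, the --n-trials flag suggests an Optuna-style search. A minimal illustration of how such a tracker-hyperparameter search can be wired up — this is an assumption-laden sketch, not the repo's actual evolve.py; the parameter names, ranges, and the toy scoring function are placeholders:

```python
import optuna

def evaluate_tracker(max_iou_distance: float, max_age: int) -> float:
    # Placeholder: in a real search this would run the tracker over the
    # validation sequences and return a metric such as MOTA or HOTA.
    return -((max_iou_distance - 0.7) ** 2) - ((max_age - 30) ** 2) / 1e4

def objective(trial: optuna.Trial) -> float:
    max_iou_distance = trial.suggest_float("max_iou_distance", 0.5, 0.95)
    max_age = trial.suggest_int("max_age", 10, 120)
    return evaluate_tracker(max_iou_distance, max_age)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)
```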
Thanks!
Use whatever method suits your needs 😄. Those are just usage examples.
Let me know your thoughts regarding the
@mikel-brostrom, I see that the MOT format is a single annotation file for the entire video. Any idea how to convert my yolo annotations to the same format as this MOT format?
Add
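Not the repo's own tooling, but a minimal sketch of such a conversion. Assumptions: one YOLO .txt per frame with normalized `class x_center y_center width height` rows, a known image resolution, and placeholder track ids of -1 (YOLO labels carry no identities, while MOT ground truth normally requires a consistent id per object). Column order follows the standard MOTChallenge gt.txt layout:

```python
from pathlib import Path

IMG_W, IMG_H = 1920, 1080            # assumed image resolution
yolo_dir = Path("labels")            # per-frame YOLO label files, sorted by name
out_file = Path("MySeq/gt/gt.txt")   # MOT-style location: <sequence>/gt/gt.txt
out_file.parent.mkdir(parents=True, exist_ok=True)

rows = []
for frame_idx, label_file in enumerate(sorted(yolo_dir.glob("*.txt")), start=1):
    for line in label_file.read_text().splitlines():
        cls, xc, yc, w, h = map(float, line.split()[:5])
        bb_w, bb_h = w * IMG_W, h * IMG_H
        bb_left = xc * IMG_W - bb_w / 2
        bb_top = yc * IMG_H - bb_h / 2
        # frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility
        rows.append(f"{frame_idx},-1,{bb_left:.1f},{bb_top:.1f},{bb_w:.1f},{bb_h:.1f},1,{int(cls)},1")

out_file.write_text("\n".join(rows) + "\n")
```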
Hi Mikel, thanks for all the quick responses. Quite new to the subject so it all takes some time for me. I have a few questions:
Again, many thanks for all the effort and help!
No, only one folder is needed; call it train, as this is the default folder it looks for during evolution.
If you only have one sequence for tuning, create a single folder under your train folder. Call it
Yes
Yes. You will probably be the first one trying it out so expect some rough edges here and there 😄. The new set of OCSORT parameters was evolved using this script, so you can expect good results.
And in which def should this be implemented?
Same place as the predicted tracks were drawn.
One question @henriksod. If we were to brute-force this by using the
Hi, I would start with the weights (defined in the Kalman filter init) if possible. However, I have not gotten an answer on whether the deep assoc is working at all with this data. Isn't the deep assoc trained on pedestrians?
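For reference, this is roughly where those weights sit in a DeepSORT/StrongSORT-style Kalman filter (a sketch modeled on the original deep_sort kalman_filter.py; the exact file and attribute names in this repo may differ):

```python
import numpy as np

class KalmanFilterSketch:
    def __init__(self, std_weight_position=1.0 / 20, std_weight_velocity=1.0 / 160):
        ndim, dt = 4, 1.0
        # Constant-velocity motion model over state [x, y, a, h, vx, vy, va, vh].
        self._motion_mat = np.eye(2 * ndim)
        for i in range(ndim):
            self._motion_mat[i, ndim + i] = dt
        self._update_mat = np.eye(ndim, 2 * ndim)
        # These two scalars scale the assumed process/measurement noise.
        # With large per-frame displacement (5 s between frames), raising
        # std_weight_velocity widens the filter's uncertainty and hence its
        # association gate -- a knob to experiment with, not a guaranteed fix.
        self._std_weight_position = std_weight_position
        self._std_weight_velocity = std_weight_velocity
```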
For overlapping KF preds and detections, the tracks get initialized and tracking starts. This is just for the case of objects with small displacement in the image plane. For most of the objects the KF is undershooting.
Sooo, these: ?
But is it IoU or deep assoc happening? Would be good to know.
When there is enough IoU between pred and det, the association takes place. This can be seen for a single object here:
Unconfirmed tracks like the ones in this dataset are associated by IoU:
Did you replace your previous code for drawing the black boxes?
What exact piece of code is that?
Are you willing to share your model @jurvanwijk? That way we could play around with it and see if we can do some manual tuning to start with.
Sure, what exactly do you need, only the weights file?
Yup 😄
I've contacted you via email.
Thx! Will look into this soon 😄
The model you sent @jurvanwijk. It is not a yolov5 model, right?
It is supposed to be; it was just a weights file, right?
I can do this part, thank you Mikel for this repo. I have a question here: I cannot find where "--benchmark" is invoked in evolve.py
How should I arrange the folder? Can I put the files like this?
Hi Mikel, do you have any updates regarding this comment?
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Sorry for my late response @jurvanwijk. This will most certainly be useful for your specific use-case: https://github.com/mikel-brostrom/yolo_tracking/releases/tag/v10.0.50
Search before asking
Question
Hello,
I want to detect floating plastics on the water surface. Detection with plain yolov5 runs goes quite well. The problem is that I want to count the objects passing and remove the duplicate counts. When running the track.py script, there are many detections missing that were detected by the regular yolov5 detect.py script:
Any idea what might cause this difference & how to fix it?