
Bad tracking results with StrongSORT on my custom tracking dataset #669

Closed
jurvanwijk opened this issue Dec 27, 2022 · 155 comments
Labels: question (Further information is requested), Stale

@jurvanwijk

jurvanwijk commented Dec 27, 2022

Search before asking

  • I have searched the Yolov5_StrongSORT_OSNet issues and found no similar bug report.

Question

Hello,

I want to detect floating plastics on the water surface. Detection with plain YOLOv5 goes quite well. The problem is that I want to count the objects passing by and remove duplicate counts. When running the track.py script, many detections are missing that were found by the regular YOLOv5 detect.py script:

image

Any idea what might cause this difference & how to fix it?

@jurvanwijk added the question label Dec 27, 2022
@mikel-brostrom
Owner

mikel-brostrom commented Dec 27, 2022

Could you send the complete Yolov5 video?
This usually happens when the detections passed to the tracker aren't stable, but yours look good on this single frame...

@jurvanwijk
Author

Hi Mikel, thanks for responding. They are single images with a 5-second interval (so no video). Would it be useful for you to have these images?

@mikel-brostrom
Owner

All the information you can give me is relevant 😄

@jurvanwijk
Author

Below I have posted three consecutive images. As said, there is a 5-second interval between the images. I want to be able to recognize the plastics in a frame and give them an ID, such that if a plastic item is still present in the next frame it is not counted again. As said above, the track.py results are far worse than the regular YOLOv5 detect.py runs. Any idea why, and how to solve it?

image_20221020_095700_01
image_20221020_095706_01
image_20221020_095712_01

@mikel-brostrom
Owner

mikel-brostrom commented Dec 28, 2022

I see the issue here. The association process relies heavily on IoU, and as the elapsed time between frames is high, the uncertainty in motion space is also high. So it would make sense to relax some of the configuration parameters. I would start by lowering:

https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet/blob/0bcdd8c388b8521bd2a495a78a8c890b7fa8f706/trackers/strong_sort/configs/strong_sort.yaml#L6

to, for example, 0.2.

and:

https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet/blob/0bcdd8c388b8521bd2a495a78a8c890b7fa8f706/trackers/strong_sort/configs/strong_sort.yaml#L5

to, for example, 0.15.

If you are willing to pass me the video, or the image folder, I could help you out here 😄. If you don't want to post it here you can send it to: yolov5.deepsort.pytorch@gmail.com

@jurvanwijk
Author

Many thanks for helping me. I will post a small batch of the image folder here:

image_20221020_092812_01.zip

@mikel-brostrom
Owner

OK, I see there is no IoU overlap between frames. Have you tried the hparams above?
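
As a quick sanity check (a minimal sketch with made-up box coordinates, not taken from your data), this is why pure IoU association has nothing to work with when an object has moved completely between frames:

def iou(a, b):
    # IoU of two boxes in (x1, y1, x2, y2) format
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# hypothetical positions of the same floating item 5 seconds apart
print(iou((100, 200, 140, 240), (400, 210, 440, 250)))  # 0.0, so IoU-based matching fails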

@jurvanwijk
Author

Yes, little to no improvement. The weird thing is that the output in Jupyter suggests detection is happening:
image

Any other suggestions? Would bursts of images help the process, since that would improve the IoU within a single 5-second interval? There would still be 5 seconds between the bursts, so I'm not sure if it is of any use.

@mikel-brostrom
Owner

Would bursts of images help the process

Not sure what you mean.

Btw, my bad. Try increasing the aforementioned values.

@jurvanwijk
Author

So the interval between my images is 5 seconds, but it would be possible to capture an image burst every 5 seconds. Instead of 1 image per 5 seconds it would then be 1 burst of 3-4 frames close together per 5 seconds. Could that help to identify objects?

I'm playing with the aforementioned values; what changes would you suggest?

@jurvanwijk
Author

Changing the values has a positive effect on the detection! It is, however, still common for the model to change the numbering of the items, making counting via the IDs quite challenging. Any idea on the optimal values and/or methods to decrease the ID switches? I've added a few examples of successive images and how the numbering changes.

image333.zip

Any ideas/suggestions are welcome!
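
(For the counting itself, once the IDs are stable, something like this minimal sketch should do. It assumes track.py was run with --save-txt and writes MOT-style rows starting with frame and id; the results path and delimiter below are assumptions to check against the actual output.)

from pathlib import Path

ids = set()
for line in Path("runs/track/exp/tracks.txt").read_text().splitlines():  # hypothetical path
    if not line.strip():
        continue
    frame, track_id, *rest = line.replace(",", " ").split()  # comma- or space-separated rows
    ids.add(int(track_id))

print(len(ids), "unique objects counted")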

@mikel-brostrom
Owner

Do you have ground truth for your data?

@mikel-brostrom
Owner

I am working on a hyperparameter-search script to get the best possible results on custom datasets

@jurvanwijk
Author

Yes, I will send a zipfile to your email with both the images and annotations.

Nice, is the hyperparameter search script a long term project or will it be finished in the near future?

@mikel-brostrom
Owner

I am developing this as we speak. Maybe by EOB. Otherwise tomorrow. Some snippets:

newplot (1)

newplot (2)

@jurvanwijk
Author

Cool, sounds good. Interested to see the results! Could you let me know if it is finished? Or what hyperparameter setup works best?

@mikel-brostrom
Owner

Won't have GPU time to run the hparam search for you, but will let you know when it is in a usable state @jurvanwijk

@mikel-brostrom
Owner

mikel-brostrom commented Dec 29, 2022

It is in a usable state now @jurvanwijk:

git pull

git checkout hps

pip install -r requirements.txt

python3 val.py --evolve --n-trials 1000

@jurvanwijk
Author

Thanks, will try it tomorrow!

@jurvanwijk
Author

The last line runs into an error; the arguments are unrecognized:

"val.py: error: unrecognized arguments: --evolve --n-trials 1000"

Any suggestions on how to fix it?

@mikel-brostrom
Owner

mikel-brostrom commented Jan 3, 2023

Sorry, I just pushed a major refactor. I separated the evolution from the validation. Pull and try the following:

$ python evolve.py --tracking-method strongsort --benchmark MOT17 --n-trials 100  # tune strongsort for MOT17
                   --tracking-method ocsort     --benchmark <your-custom-dataset> # tune ocsort for your custom tracking dataset

Note that your dataset has to have the same format as MOT for this to work, and it must be placed under val_utils/data.

@jurvanwijk
Author

Thanks!
Any reason why I should not use strongsort for the custom dataset?

@mikel-brostrom
Owner

mikel-brostrom commented Jan 3, 2023

Use whatever method suits your needs 😄. Those are just usage examples.

@mikel-brostrom
Owner

Let me know your thoughts regarding the evolve.py script @jurvanwijk and what troubles you are facing 😄

@jurvanwijk
Author

@mikel-brostrom, I see that the MOT format uses a single annotation file for the entire video. Any idea how to convert my YOLO annotations to this MOT format?

@mikel-brostrom
Owner

mikel-brostrom commented Jan 3, 2023

Add <frame>, <id> to each bbox and merge all the txts
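
A minimal sketch of that conversion (the image size, folder names and placeholder id below are assumptions; real MOT ground truth needs a consistent track id per object across frames, which you have to supply from your own annotations):

from pathlib import Path

IMG_W, IMG_H = 1920, 1080         # your image resolution
label_dir = Path("labels")        # one YOLO txt per frame: "class cx cy w h", normalised
rows = []

for frame, txt in enumerate(sorted(label_dir.glob("*.txt")), start=1):
    for n, line in enumerate(txt.read_text().splitlines()):
        if not line.strip():
            continue
        cls, cx, cy, w, h = map(float, line.split())
        bb_w, bb_h = w * IMG_W, h * IMG_H
        bb_left = cx * IMG_W - bb_w / 2
        bb_top = cy * IMG_H - bb_h / 2
        track_id = n + 1          # PLACEHOLDER: replace with the real track id of this object
        rows.append(f"{frame},{track_id},{bb_left:.1f},{bb_top:.1f},{bb_w:.1f},{bb_h:.1f},1")

Path("gt.txt").write_text("\n".join(rows) + "\n")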

@jurvanwijk
Author

jurvanwijk commented Jan 3, 2023

Hi Mikel, thanks for all the quick responses. I'm quite new to the subject, so it all takes some time for me. I have a few questions:

  • The MOT17 contains both train and test data, do I also need both for the evolve script?
  • The MOT17 folders are divided in DPM, FRCNN and SDP subfolders. Which one will the evolve script use/need? They differ slightly in their annotation.
  • If I understand correctly, I can create a "custom dataset" folder in the data folder, containing a folder with the images and a single annotation file where each line is an object annotation in the format: frame, id, bb_left, bb_top, bb_width, bb_height, conf. My annotations do not contain the conf, can I neglect this in the annotation file? Is the evolve script able to automatically find all this by only changing the --benchmark input variable?

Again, many thanks for all the effort and help!

@mikel-brostrom
Owner

mikel-brostrom commented Jan 3, 2023

The MOT17 contains both train and test data, do I also need both for the evolve script?

No, only one folder is needed. Call it train, as this is the default folder it will look for during evolution.

The MOT17 folders are divided in DPM, FRCNN and SDP subfolders. Which one will the evolve script use/need? They differ slightly in their annotation.

If you only have one sequence for tuning, create a single folder under your train folder. Call it seq1_FRCNN for example

My annotations do not contain the conf, can I neglect this in the annotation file?

Yes

Is the evolve script able to automatically find all this by only changing the --benchmark input variable?

Yes. You will probably be the first one trying it out, so expect some rough edges here and there 😄. The new set of OCSORT parameters was evolved using this script, so you can expect good results.
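
A minimal sketch of the layout this adds up to (the benchmark and sequence names are placeholders; official MOT sequences also ship a seqinfo.ini, which may be needed here as well):

from pathlib import Path

seq = Path("val_utils/data/MY_DATASET/train/seq1_FRCNN")  # placeholder names
(seq / "img1").mkdir(parents=True, exist_ok=True)         # frames: 000001.jpg, 000002.jpg, ...
(seq / "gt").mkdir(parents=True, exist_ok=True)           # gt.txt with frame,id,left,top,width,height,conf rows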

@jurvanwijk
Author

Something like this

# pred step
self.tracker.predict()

# one colour per track id (extend the list with more colours as needed)
trk_cols = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 0)]

# draw the predicted (pre-update) track boxes
for track in self.tracker.tracks:
    box = track.to_tlwh()
    x1, y1, x2, y2 = self._tlwh_to_xyxy(box)
    col = trk_cols[track.id % len(trk_cols)]
    ori_img = cv2.rectangle(ori_img, (x1, y1), (x2, y2), col, 2)

# update step
self.tracker.update(detections, clss, confs)

And in which def should this be implemented?

@henriksod
Contributor

Same place as the predicted tracks were drawn

@jurvanwijk
Author

Same place as the predicted tracks were drawn

Appears not to be working. Any idea what might be going wrong?
image

The error I get is the following:
image

But changing the 'tracker' to 'tracks' gives the following error:
image

@mikel-brostrom
Owner

One question @henriksod. If we were to brute-force this using the evolve.py script, which values do you think are worth searching in the KF for this case?

@henriksod
Contributor

Same place as the predicted tracks were drawn

Appears not to be working. Any idea what might be going wrong?
image

The error I get is the following:
image

But changing the 'tracker' to 'tracks' gives the following error:
image

Hi, this should be under the update method, not under the initiate method.

@henriksod
Contributor

One question @henriksod. If we were to brute-force this using the evolve.py script, which values do you think are worth searching in the KF for this case?

Hi, I would start with the weights (defined in the Kalman filter init) if possible.

However, I have not gotten an answer on whether the deep association is working at all with this data. Isn't the deep association model trained on pedestrians?

@mikel-brostrom
Owner

mikel-brostrom commented Jan 23, 2023

When the KF predictions and detections overlap, the tracks get initialized and tracking starts. But this only happens for objects with small displacement in the image plane. For most of the objects the KF is undershooting.

@mikel-brostrom
Owner

Hi, I would start with the weights (defined in the Kalman filter init) if possible.

Sooo, these:

https://github.com/mikel-brostrom/yolov8_tracking/blob/04505e4684416017eec07a35cf2120aa5b1318ae/trackers/strongsort/sort/kalman_filter.py#L46-L47

?
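
For reference, in a deep-sort-style Kalman filter those init weights usually look like the sketch below (the exact lines at that commit may differ); larger values mean more assumed motion uncertainty per step, so the filter tolerates bigger jumps between frames:

class KalmanFilter:
    def __init__(self):
        # ... transition/observation matrices omitted ...
        # motion-noise scales (relative to box height), candidates for the hparam search
        self._std_weight_position = 1.0 / 20
        self._std_weight_velocity = 1.0 / 160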

@henriksod
Contributor

When the KF predictions and detections overlap, the tracks get initialized and tracking starts. But this only happens for objects with small displacement in the image plane. For most of the objects the KF is undershooting.

But is it the IoU or the deep association that is happening? Would be good to know.

@mikel-brostrom
Owner

When there is enough IoU between prediction and detection, the association takes place. This can be seen for a single object here:

#669 (comment)

@jurvanwijk
Author

Same place as the predicted tracks were drawn

Appears not to be working. Any idea what might be going wrong?
image
The error I get is the following:
image
But changing the 'tracker' to 'tracks' gives the following error:
image

Hi, this should be under the update method, not under the initiate method.

Mmm, it still appears not to be working when put under 'def update' or under 'pred_n_update_all_tracks'. Only the black boxes without IDs are showing up. Any idea what might be going wrong?

@henriksod
Contributor

henriksod commented Jan 24, 2023

Did you replace your previous code for drawing the black boxes?

@jurvanwijk
Author

What exact piece of code is that?

@mikel-brostrom
Owner

Are you willing to share your model @jurvanwijk? That way we could play around with it and see if we can do some manual tuning to start with.

@jurvanwijk
Author

Are you willing to share your model @jurvanwijk? That way we could play around with it and see if we can do some manual tuning to start with.

Sure, what exactly do you need, only the weights file?

@mikel-brostrom
Owner

Yup 😄

@jurvanwijk
Author

I've contacted you via email

@mikel-brostrom
Owner

Thx! Will look into this soon 😄

@mikel-brostrom
Owner

The model you sent @jurvanwijk. It is not a yolov5 model right?

@jurvanwijk
Author

The model you sent @jurvanwijk. It is not a yolov5 model right?

It is supposed to be, it was just a weights file right?

@Louis24

Louis24 commented Feb 15, 2023


I can do this part. Thank you Mikel for this repo. I have a question here: I cannot find where "--benchmark" is invoked in evolve.py.

@Louis24

Louis24 commented Feb 15, 2023

image
But I found this in val.py

@Louis24

Louis24 commented Feb 15, 2023

How should I arrange the folder? Can I put the files like this?
\val_utils\data\cube
where images contains the frames and gt contains gt.txt?

@jurvanwijk
Author

The model you sent @jurvanwijk. It is not a yolov5 model right?

Hi Mikel, do you have any updates regarding this comment?

@github-actions

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

@mikel-brostrom
Owner

mikel-brostrom commented Jan 12, 2024

Sorry for my late response @jurvanwijk. This will most certainly be useful for your specific use-case: https://github.com/mikel-brostrom/yolo_tracking/releases/tag/v10.0.50
