Small object detection and image sizes #46
Hello! Thank you for such a great implementation. Amazing inference performance!
I have a few questions that I would like some quick clarification on. Imagine I have a database of images of size 1980x1080:

1. When using `train.py`, what does `--img` really do? Does it scale images, keeping aspect ratio, to then feed into the network at that given size, and then calculate the number of tiles based on stride and dimensions?
2. Does `--img` take parameters [width, height] or [height, width]?
3. If I trained a network using `--img 1280`, what should I set my `--img-size` to when using `detect.py`? 1280 as well? My assumption is that if I have images of 1980x1080 and I want to find small objects in each, I should then train my network with image size 1980 to retain image information, correct?
4. What do you recommend for the `anchors` in the .yaml when detecting smaller objects? The model is already fantastic at finding small objects, but I am curious if there are any other tips you have on tweaking training parameters to find small objects reliably in images.
5. Trying to use the `--evolve` arg ends up with an error.

Thank you in advance!
Hello @mbufi, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook, Docker Image, and Google Cloud Quickstart Guide for example environments. If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients. For more information please visit https://www.ultralytics.com.
@mbufi `--img` (which is short for `--img-size`) accepts two values, the train and test sizes. If you supply one value it is used for both, for example:
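A minimal illustration of the two forms (the flag semantics are as described above; the sizes are placeholders):

```
# two values: train at 640, test at 320
python train.py --img 640 320

# one value: 640 is used for both train and test
python train.py --img 640
```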
Training at native resolution will always produce the best results if your hardware/budget allows for it. Object sizes significantly different from the default anchors (as measured in pixels at your training `--img`), though, would require you to modify the anchors as well for best results. Training and inference should be paired at the same resolution: if you plan on inference at 1980, train at 1980; if you plan on inference at 1024, train at that size. Just remember the anchors do not change size, they are fixed in pixel-space, so modify them as appropriate if necessary. We offer a hybrid kmeans-genetic evolution algorithm for anchor computation (lines 657 to 662 at commit ad71d2d).
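As a rough sketch of the kmeans half of that idea (this is not the repo's exact `kmean_anchors()` implementation; the genetic refinement step is omitted and the helper name is hypothetical):

```python
import numpy as np
from scipy.cluster.vq import kmeans

def estimate_anchors(wh, n=9, img_size=640):
    """Cluster label widths/heights into n anchor priors.

    wh: Nx2 array of box (width, height) normalized to [0, 1].
    Returns n anchors in pixels at the given training size.
    """
    wh = wh * img_size                  # to pixel space at the train --img
    s = wh.std(0)                       # whiten so kmeans weighs w and h evenly
    k, _ = kmeans(wh / s, n, iter=30)   # cluster in whitened space
    k *= s                              # back to pixels
    return k[np.argsort(k.prod(1))]     # sort by area, small to large
```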
@glenn-jocher Great! This all makes sense now :) Thank you so much for that great description. With that said: should I run the anchor kmeans before training, and how do I apply the resulting anchors?
Thank you again for your dedication!
@mbufi you can optionally run `kmean_anchors()` if you feel your objects are not similar in size to the default anchors. You would do this before training, and then manually place the final generation of evolved anchors into your model .yaml file (lines 6 to 11 at commit ad71d2d).
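For reference, the anchors section of a yolov5 model .yaml looks like this (the values shown are the stock COCO anchors; evolved anchors would replace the [w,h] pairs):

```yaml
anchors:
  - [10,13, 16,30, 33,23]        # P3/8
  - [30,61, 62,45, 59,119]       # P4/16
  - [116,90, 156,198, 373,326]   # P5/32
```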
We have not tried to use --evolve in this repo yet, so I can't speak to its status. In any case, this is a much more advanced offline feature (it is not part of training) which you would only try if default training is not producing results that are acceptable to you. It requires significant time and resources to produce results.
@glenn-jocher Awesome. That's what I figured... In the example in the code, where did you get the text file it references?
@mbufi there is no text file like this. You can create a custom dataset using coco128.yaml as a template:
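A hypothetical dataset .yaml modeled on coco128.yaml (all paths, the class count, and the names below are placeholders):

```yaml
train: ../my_dataset/images/train/   # directory of training images
val: ../my_dataset/images/val/       # directory of validation images

nc: 3                                # number of classes
names: ['person', 'car', 'boat']     # class names
```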
@glenn-jocher Yes, correct. I have my own customdata.yaml. The problem I am getting is using the kmeans() algo with my .yaml. I know the .yaml works because I have already trained my own custom model with it; I am now in the process of generating new anchors, and it errors out. I even tried to run it with coco128.yaml and its images and it still gives me the same error.
@mbufi yes, this is possible, since we have not actually updated this function for yolov5 yet. We will try to update it next week. In the meantime you may simply try to pass the directory of your training images, as shown in the yaml (line 11 at commit ad71d2d).
TODO: Update kmean_anchors() for v5
@glenn-jocher Okay. Great. Thanks for all your help!
Passing the directory directly worked for me:
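The actual call was lost from the thread; a hypothetical invocation in that spirit (the import path and argument names follow the `kmean_anchors()` signature of that era and may differ in current code):

```python
from utils.utils import kmean_anchors  # later moved to utils/autoanchor.py

# pass the image directory directly instead of a dataset .yaml
anchors = kmean_anchors(path='../my_dataset/images/train/', n=9, img_size=640, thr=4.0)
print(anchors)
```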
@Jacobsolawetz yes, it works. I believe the latest commit allows you to pass the .yaml. Do you have a good understanding of the threshold with regard to small objects? I see you are using 4.0. Why is that?
All, kmeans has been updated, and AutoAnchor is now implemented. This means anchors are analyzed automatically and updated as necessary. No action is required on the part of the user; this is the new default behavior. You simply train normally as before to get this. git pull or clone again to receive this update.
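On the 4.0 threshold above: as I understand the autoanchor check, a label counts as matched when its width and height are each within a factor of thr of some anchor. A sketch of that ratio metric (variable and function names here are mine, not the repo's):

```python
import numpy as np

def best_possible_recall(wh, anchors, thr=4.0):
    """Fraction of labels whose w and h are within a factor of `thr`
    of at least one anchor (all values in pixels at the train --img)."""
    r = wh[:, None] / anchors[None]     # NxMx2 w/h ratios, each label vs each anchor
    x = np.minimum(r, 1 / r).min(2)     # worst-axis ratio per label/anchor pair
    best = x.max(1)                     # best anchor match per label
    return (best > 1 / thr).mean()      # share of labels any anchor can recall
```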
@glenn-jocher very nice
@Jacobsolawetz thanks! I've been meaning to get this done for a while now. To be honest, manually crunching anchors and then slotting them back into a model file is a pretty complicated task that can go wrong in a lot of places, so automating the process should remove those failure points. And of course, I have a feeling poor anchor-data fits may be one of the primary reasons people see x results in a paper, but then turn around to find y results on their custom dataset (where y << x). Hopefully this will help bridge that gap in a painless way.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@glenn-jocher May I know if the anchor data gets saved into the .yaml file after AutoAnchor is run? I need to know the anchors being used for further output processing.
@foochuanyue You may want to read the AutoAnchor output, which answers your question.
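If that console output has scrolled away, the anchors can also be read back from a trained checkpoint. A hypothetical snippet (the attribute names follow yolov5's Detect head and may vary between versions):

```python
import torch

ckpt = torch.load('runs/exp0/weights/best.pt', map_location='cpu')
detect = ckpt['model'].model[-1]   # Detect() is the last module in the model
print(detect.anchors)              # per-layer anchors (stored in stride units in some versions)
print(detect.stride)               # multiply by stride to recover pixel-space anchors
```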
@glenn-jocher ok! thanks!
How do we resize the input video or reduce the fps in the input video? |
@SureshbabuAkash1999 set the inference size with the --img argument, for example:
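A minimal detect.py invocation (the source path is a placeholder; the fps is governed by the source video itself):

```
# run inference on a video at a reduced input size
python detect.py --source path/to/video.mp4 --img 640
```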