How can I better train yolo-detect? #17582
Replies: 2 comments
-
👋 Hello @SnifferCaptain, thank you for reaching out with your training question! It sounds like you're working on an exciting yet challenging project with your dataset 🚀. We recommend checking our Docs, especially the Tips for Best Training Results, which may help address the recall issue you're experiencing. If you think a custom loss function might be necessary, our community members on Discord 🎧 enjoy discussions around custom training and might offer novel insights.

Please note that this is an automated response; an Ultralytics engineer will assist you shortly. If you wish, please provide additional details about your training setup or a minimum reproducible example to help us support you further.

To ensure you are using the latest features and fixes that might help, make sure your environment is up to date. You can upgrade with `pip install -U ultralytics`. Verified environments with everything preinstalled are also available for quick startup and more robust setups.
Rest assured, once the Ultralytics CI badge indicates green, all Ultralytics CI tests are passing. Your patience is greatly appreciated while awaiting additional support, and we're excited to see what you'll achieve with your YOLO training! 🎉
-
@SnifferCaptain based on your validation metrics, I can suggest a few targeted adjustments to improve recall while maintaining precision. First, try lowering the confidence threshold used during validation (e.g., `conf=0.1`), which can help surface more potential detections. For specific guidance on adjusting these hyperparameters, see the model training tips at https://docs.ultralytics.com/guides/model-training-tips/.

If these adjustments don't achieve the desired results, you may want to review your dataset for class imbalance and consider data augmentation techniques. Let us know if you need help implementing these changes or if you'd like to explore additional optimization strategies.
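To see why a lower confidence threshold helps recall, here is a toy sketch (plain Python, not the Ultralytics API) with made-up detection scores: discarding fewer low-confidence detections keeps more true positives (recall rises) while admitting more false positives (precision falls).

```python
# Toy illustration of the confidence-threshold trade-off.
# Detections below the threshold are discarded, so a lower
# threshold raises recall at the cost of precision.

def precision_recall(detections, threshold):
    """detections: list of (confidence, is_true_positive) pairs.
    Returns (precision, recall) at the given threshold."""
    total_positives = sum(1 for _, is_tp in detections if is_tp)
    kept = [(c, is_tp) for c, is_tp in detections if c >= threshold]
    tp = sum(1 for _, is_tp in kept if is_tp)
    fp = len(kept) - tp
    precision = tp / (tp + fp) if kept else 1.0
    recall = tp / total_positives if total_positives else 0.0
    return precision, recall

# Synthetic scores: true objects tend to score higher, but some
# score low and are cut off by a strict threshold.
dets = [(0.9, True), (0.8, True), (0.7, False), (0.6, True),
        (0.4, True), (0.3, False), (0.2, True), (0.1, False)]

strict = precision_recall(dets, 0.5)  # (0.75, 0.6): misses low-score objects
loose = precision_recall(dets, 0.1)   # (0.625, 1.0): finds all, more FPs
```

The same effect applies at validation time in any detector: if recall is the metric you care about, evaluate at a lower threshold before concluding the model itself is missing objects.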
-
I've got a hard task on my private dataset. In the validation run of every epoch, the box P, mAP50, and mAP50-95 are in a reasonable range (about 0.5 P, 0.38 mAP50, 0.28 mAP50-95). However, recall is poor (only about 0.3 after 30 epochs). Are there any hyperparameter settings that can make the network put more focus on recall in the loss, or should I code a loss function myself?