Ignore correct detections that are wrongly classified #2927
👋 Hello @nmarchal, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed. To install, run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@nmarchal good news 😃! Your feature request should now be implemented ✅ in PR #2928. To receive this update, you can update your local repository (e.g. with git pull) or re-clone it.
Single-class training has been a train.py feature for a while, and now PR #2928 should allow models trained with multiple classes to be tested under the single-class assumption.
Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
@glenn-jocher thanks for the very fast response! I know about single-class training and testing, but what I have in mind for robotics applications is different. Let's assume we train the network to detect cats and dogs (a random example). I want to evaluate the network on both the cat and the dog categories, so I would run the normal multi-class evaluation. However, if a cat is classified as a dog, I want to ignore this case in the evaluation (i.e. perhaps we can just ignore this detection). The goal is to evaluate detection quality separately from classification quality.
This solution, which improves precision without affecting recall, is extremely important in robotics because it mimics the situation where a human can review the detection and change the label if necessary. Using the single-class option does not achieve this, since I still want to evaluate each class separately. I hope my feature request is clearer now :)
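The rule described above could be sketched roughly as follows. This is a hypothetical illustration, not YOLOv5 code; the function names, box format, and greedy matching are all assumptions (a real evaluator would sort detections by confidence first):

```python
# Hypothetical sketch of the proposed rule (not YOLOv5 code): a detection whose
# box matches a ground-truth box of a *different* class above the IoU threshold
# is ignored, i.e. counted neither as a true positive nor as a false positive.

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def label_detections(dets, gts, iou_thr=0.5):
    """dets, gts: lists of (class_id, box).

    Greedily match each detection to the unmatched ground truth with the
    highest IoU, then label it:
      'tp'      - box and class both match
      'ignored' - box matches but the class differs (the proposed rule)
      'fp'      - no ground-truth box overlaps enough
    """
    labels, matched = [], set()
    for det_cls, det_box in dets:
        best_iou, best_j = 0.0, -1
        for j, (gt_cls, gt_box) in enumerate(gts):
            if j not in matched:
                o = iou(det_box, gt_box)
                if o > best_iou:
                    best_iou, best_j = o, j
        if best_iou >= iou_thr:
            matched.add(best_j)
            labels.append('tp' if gts[best_j][0] == det_cls else 'ignored')
        else:
            labels.append('fp')
    return labels
```

For the cat/dog example: a "dog" detection sitting exactly on a cat ground-truth box would be labelled 'ignored' rather than 'fp', so it no longer hurts precision.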
@nmarchal I don’t quite follow. I also don’t think what you’re describing is supported by any standard metric definition I’m aware of. You’re free to modify the code to support your custom requirements, of course!
🚀 Feature
When running an object detector on similar objects (e.g. in custom datasets), some objects might be correctly detected but wrongly classified. It would be good to add a flag to the testing code to ignore those cases (my understanding is that they are currently counted as false positives).
Motivation
In many robotics applications, wrongly classified objects are not necessarily an issue. The community would therefore benefit from being able to evaluate the network while ignoring those cases.
Pitch
I would love to have a flag that ignores wrongly classified detections during evaluation (i.e. considers them neither true positives nor false positives).
Alternatives
This is a straightforward feature request. An alternative (or an additional feature) would be to consider wrongly classified objects (good detection bounding box but bad label) as true positives. That would be extremely useful for evaluating the best possible recall if classification were perfect, and for quantifying how much classification can be improved.
An idea would be:
- flag = 0 → current implementation
- flag = 1 → ignore wrongly classified detections
- flag = 2 → consider wrongly classified detections as true positives
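The three-way flag could work roughly like this; a minimal sketch with illustrative names (the flag is not an existing YOLOv5 option). Here "mis" marks a detection whose box matches a ground truth (IoU above threshold) but whose predicted class differs:

```python
# Hypothetical sketch of the proposed evaluation flag (not an existing
# YOLOv5 option). Per-detection outcomes are 'tp', 'fp', or 'mis'
# (misclassified: correct box, wrong class).

def count_tp_fp(outcomes, flag=0):
    """Return (tp, fp) counts under the proposed flag semantics.

    flag = 0: current behaviour, a misclassified detection is a false positive
    flag = 1: misclassified detections are ignored (neither TP nor FP)
    flag = 2: misclassified detections count as TPs (localization-only metric)
    """
    tp = outcomes.count('tp')
    fp = outcomes.count('fp')
    mis = outcomes.count('mis')
    if flag == 0:
        fp += mis          # wrong class penalized as a false positive
    elif flag == 2:
        tp += mis          # correct box is enough, class ignored
    # flag == 1: misclassified detections are simply dropped
    return tp, fp
```

With outcomes = ['tp', 'mis', 'fp'], precision tp / (tp + fp) goes from 1/3 (flag 0) to 1/2 (flag 1) to 2/3 (flag 2), which matches the intent above: ignoring or accepting misclassifications can only raise precision.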
Additional context
This is an important feature for robotics and other applications, and it would benefit many users.