
Ignore correct detection wrongly classified #2927

Closed
nmarchal opened this issue Apr 25, 2021 · 4 comments · Fixed by #2928
Labels
enhancement New feature or request

Comments

@nmarchal

🚀 Feature

When running an object detector on similar objects (e.g. on custom datasets), some objects might be correctly detected but wrongly classified. It would be good to add a flag in the testing code to ignore those cases (my understanding is that they are currently counted as false positives).

Motivation

In many robotics applications, wrongly classified objects are not necessarily an issue, so it would be useful for the community to be able to evaluate the network while ignoring those cases.

Pitch

I would love to have a flag that ignores wrongly classified detections during evaluation (i.e. counts them as neither true positives nor false positives).

Alternatives

This is a straightforward feature request. An alternative (or complementary feature) would be to count wrongly classified objects (good detection bounding box but bad label) as true positives. That would be extremely useful for evaluating the best possible recall if the classification were perfect, and for quantifying how much the classification stage could be improved.

An idea would be:

  • flag = 0 --> current implementation
  • flag = 1 --> ignore wrongly classified detections
  • flag = 2 --> count wrongly classified detections as true positives
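To make the pitch concrete, here is a minimal sketch of how such a flag could slot into the matching step. Everything here is hypothetical: the function name, array layout, and flag argument are mine, not YOLOv5's actual API, and it assumes detections have already been matched to ground-truth boxes by IoU.

```python
import numpy as np

def score_matches(iou_matched, det_cls, gt_cls, flag=0):
    """Hypothetical sketch, not YOLOv5's actual API.

    iou_matched: (n,) bool, True where detection i overlaps a GT box
    det_cls:     (n,) int, predicted class of each detection
    gt_cls:      (n,) int, class of the matched GT box (unused if unmatched)
    flag:        0 = current behaviour (wrong class -> false positive)
                 1 = ignore wrongly classified detections entirely
                 2 = count wrongly classified detections as true positives
    """
    right_class = det_cls == gt_cls
    tp = iou_matched & right_class       # good box, good label
    wrong = iou_matched & ~right_class   # good box, bad label
    ignored = np.zeros_like(tp)

    if flag == 0:        # misclassified counts against precision
        fp = ~tp
    elif flag == 1:      # misclassified counts toward nothing
        fp = ~iou_matched
        ignored = wrong
    else:                # flag == 2: localization-only true positives
        tp = iou_matched
        fp = ~iou_matched
    return tp, fp, ignored
```

Precision and recall would then be computed from tp and fp as usual, with ignored detections (and their ground-truth objects) excluded from both denominators.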

Additional context

This is an important feature for robotics and other applications, and would benefit many users.

@nmarchal nmarchal added the enhancement New feature or request label Apr 25, 2021
@github-actions
Contributor

github-actions bot commented Apr 25, 2021

👋 Hello @nmarchal, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

[CI CPU testing badge]

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher glenn-jocher linked a pull request Apr 25, 2021 that will close this issue
@glenn-jocher
Member

glenn-jocher commented Apr 25, 2021

@nmarchal good news 😃! Your feature request should now be implemented ✅ in PR #2928. To receive this update you can:

  • git pull from within your yolov5/ directory
  • git clone https://github.com/ultralytics/yolov5 again
  • Force-reload PyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)

Single-class training has been a train.py feature for a while, and now PR #2928 should allow models trained with multiple classes to be tested under the single-class assumption:

python test.py --single-cls
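Conceptually, the single-class assumption just collapses every class ID to zero before matching, so any well-localized detection counts regardless of label. A rough illustration of the idea (not the actual test.py code; the helper function is hypothetical, with array layouts following the usual YOLOv5 conventions):

```python
import numpy as np

def to_single_class(labels, detections):
    """Collapse class IDs so evaluation scores localization only.

    labels:     (n, 5) array, rows of [class, x, y, w, h]
    detections: (m, 6) array, rows of [x1, y1, x2, y2, conf, class]
    """
    labels = labels.copy()
    detections = detections.copy()
    labels[:, 0] = 0      # every ground-truth object becomes class 0
    detections[:, 5] = 0  # every prediction becomes class 0
    return labels, detections

# Example: a ground-truth "dog" (class 2) and a predicted "cat" (class 1)
labels = np.array([[2, 0.5, 0.5, 0.2, 0.2]])
dets = np.array([[10, 10, 50, 50, 0.9, 1]])
labels, dets = to_single_class(labels, dets)  # both now class 0, so they can match
```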

Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@nmarchal
Author

@glenn-jocher thanks for the very fast response!

I know about single-class training and testing, but what I had in mind for robotics applications is different.

Let's assume we train the network to detect cats and dogs (a random example). I want to evaluate the network on both the cat and dog categories, so I would run the normal test.py, without the --single-cls flag, obviously.

However, if a cat is classified as a dog, I want to ignore this case in the evaluation (i.e. perhaps we can just ignore this image). The goal is:

  • Recall: recall is not really affected, since we ignore both the label and the detection
  • Precision: precision improves because we have one fewer false positive

This solution, which improves precision without affecting recall, is extremely important in robotics because it mimics the situation where a human can review the detection and change the label if necessary.
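To put rough numbers on this (toy figures, purely for illustration): suppose 100 ground-truth objects and 100 detections, of which 90 are correct, 5 are well-localized but mislabeled, and 5 are spurious boxes with no matching object.

```python
gt, tp, mislabeled, spurious = 100, 90, 5, 5

# Current behaviour: mislabeled detections are FPs, their objects FNs.
p0 = tp / (tp + mislabeled + spurious)  # 90/100 = 0.900
r0 = tp / gt                            # 90/100 = 0.900

# Proposed "ignore" mode: drop the mislabeled detection/object pairs
# from both sides of the tally.
p1 = tp / (tp + spurious)               # 90/95  ≈ 0.947
r1 = tp / (gt - mislabeled)             # 90/95  ≈ 0.947
```

Precision rises because a whole class of false positives disappears; recall moves only through the slightly smaller denominator.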

Using the --single-cls flag captures a very similar idea; however, we lose the per-class information, which is necessary in some robotics applications.

I hope my feature request is clearer now :)

@glenn-jocher
Copy link
Member

@nmarchal I don’t quite follow. I also don’t think what you’re describing is officially supported by any metric definition I’m aware of.

You’re free to modify the code to support your custom requirements of course though!
