
How to know our accuracy #3533

Closed
Hussain06061997 opened this issue Jun 8, 2021 · 8 comments · Fixed by #3586 or #3587
Labels
question Further information is requested

Comments

@Hussain06061997
❔Question

Anyone please help me to give me detail about my confusion matrix performance

[confusion matrix image attached]

@Hussain06061997 Hussain06061997 added the question Further information is requested label Jun 8, 2021
@github-actions
Contributor

github-actions bot commented Jun 8, 2021

👋 Hello @Hussain06061997, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

@Hussain06061997 your confusion matrix looks great, i.e. good training results. If you'd like to learn more about confusion matrices in general you can visit https://en.wikipedia.org/wiki/Confusion_matrix
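To make the Wikipedia pointer concrete, here is a small illustrative sketch (not YOLOv5 code, and with made-up counts) of how per-class precision and recall are read off a confusion matrix whose rows are predicted classes and columns are true classes:

```python
import numpy as np

# Illustrative only: a 2-class confusion matrix with hypothetical counts,
# rows = predicted class, columns = true class.
matrix = np.array([[50,  5],   # predicted class 0
                   [10, 35]])  # predicted class 1

tp = np.diag(matrix)                  # correct predictions per class
precision = tp / matrix.sum(axis=1)   # TP / (TP + FP), per predicted row
recall = tp / matrix.sum(axis=0)      # TP / (TP + FN), per true column

print(precision)  # [50/55, 35/45]
print(recall)     # [50/60, 35/40]
```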

@mansi-aggarwal-2504

mansi-aggarwal-2504 commented Jun 11, 2021

@glenn-jocher I have a question. The confusion matrix does show the fraction of correctly labeled objects, and with only one class the background-FP cell for that label is naturally 1.0, i.e. 100% (since all FPs belong to that class).
But that means I can't compare FP rates between models. For example, one model detects 76% of objects and a second detects 81%, but the second model also produces more false positives.
Model 1:
[confusion matrix image for Model 1]

Model 2:
[confusion matrix image for Model 2]

I don't get that info from the yolov5 confusion matrix. Is there any way I can get that?

(my images include 50-100 small objects in a single picture so its difficult to check FPs by manual inspection)

@glenn-jocher
Member

@mansi-aggarwal-2504 you can comment out this line to plot absolute values instead:

array = self.matrix / (self.matrix.sum(0).reshape(1, self.nc + 1) + 1E-6) # normalize
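For context, this line column-normalizes the (nc+1)×(nc+1) matrix (the extra row/column is background) so that each true-class column sums to roughly 1; commenting it out leaves raw counts, which is what lets you compare FP counts across models. A minimal NumPy sketch with hypothetical numbers:

```python
import numpy as np

nc = 1  # one class, plus a background row/column
matrix = np.array([[76., 24.],   # hypothetical absolute counts
                   [30.,  0.]])

# The line in question: divide each column by its column sum (plus a
# small epsilon to avoid division by zero), turning counts into fractions.
array = matrix / (matrix.sum(0).reshape(1, nc + 1) + 1e-6)

# Each column now sums to ~1; skipping this step keeps absolute counts.
```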

@mansi-aggarwal-2504

@glenn-jocher it's working. Thank you!

@glenn-jocher
Member

glenn-jocher commented Jun 11, 2021

@mansi-aggarwal-2504 good news 😃! I've updated the code ✅ in PRs #3586 and #3587 to make this easier by providing a normalize flag in the ConfusionMatrix plot method. To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks in Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image
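The idea behind the new flag can be sketched as follows. This is a toy stand-in, not the real YOLOv5 ConfusionMatrix class (whose exact plot signature may differ); it only illustrates what a normalize flag switches between:

```python
import numpy as np

class ConfusionMatrixSketch:
    """Toy stand-in for a confusion matrix with a normalize toggle.

    Not the actual YOLOv5 class; the matrix shape and normalization
    mirror the quoted line above.
    """
    def __init__(self, nc):
        self.nc = nc
        self.matrix = np.zeros((nc + 1, nc + 1))  # +1 for background

    def array(self, normalize=True):
        if normalize:
            # relative (column-normalized) view, as plotted by default
            return self.matrix / (self.matrix.sum(0).reshape(1, self.nc + 1) + 1e-6)
        return self.matrix.copy()  # absolute counts

cm = ConfusionMatrixSketch(nc=1)
cm.matrix = np.array([[76., 24.],  # hypothetical counts
                      [30., 0.]])
abs_view = cm.array(normalize=False)  # raw counts, comparable across models
rel_view = cm.array(normalize=True)   # fractions, columns sum to ~1
```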

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@glenn-jocher glenn-jocher linked a pull request Jun 11, 2021 that will close this issue
@pravastacaraka

@glenn-jocher Can you tell me what is background FN and background FP?

@mansi-aggarwal-2504

@pravastacaraka background FN/FP are the fractions of FNs/FPs attributed to each class in your dataset. If you scroll up to the confusion matrix in my comment: since I have only one class, all FPs belong to that class, hence the 1.0 (i.e. 100%), and 24% of the flowers were not detected at all (FNs).
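In index terms, and assuming the convention of rows = predicted class and columns = true class with the last index as background, those two cells can be read like this (numbers are the hypothetical ones from my comment above):

```python
import numpy as np

# Hypothetical column-normalized 2x2 matrix for one class ("flower")
# plus background; rows = predicted, columns = true (assumed convention).
nc = 1
matrix = np.array([[0.76, 1.0],   # predicted flower
                   [0.24, 0.0]])  # predicted background

background_fn = matrix[nc, 0]  # true flowers predicted as background (missed)
background_fp = matrix[0, nc]  # background predicted as flower (spurious)

print(background_fn, background_fp)  # 0.24 1.0
```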
