
I have a problem with low confidence in detecting #12622

Closed
1 task done
thorww opened this issue Jan 13, 2024 · 10 comments
Labels
question (Further information is requested), Stale (scheduled for closing soon)

Comments

@thorww

thorww commented Jan 13, 2024

Search before asking

Question

I am using YOLOv5 to train on medical images. The training and validation metrics are very good, reaching about 0.9. However, during inference the confidence of the predicted boxes is very low: very few boxes exceed 0.5, and the few high-confidence boxes in an image rarely overlap the actual target. What causes this, and are there any good solutions?

Additional

No response

@thorww thorww added the question Further information is requested label Jan 13, 2024
Contributor

github-actions bot commented Jan 13, 2024

👋 Hello @thorww, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics

@glenn-jocher
Member

@thorww hello! It's great to hear that you're achieving good performance metrics during training and validation with your medical images. If you're experiencing low confidence scores during inference, here are a few suggestions that might help:

  1. Threshold Adjustment: Try adjusting the confidence threshold during inference. The default value might be too high for your specific use case.

  2. Data Distribution: Ensure that the data distribution of your training set closely matches that of your inference set. Differences in distributions can lead to poor generalization.

  3. Augmentation: Review your augmentation strategies. Over-augmentation can sometimes lead to a model that is too invariant to the features that are actually important for detection.

  4. Model Overfitting: High validation metrics but poor inference performance could indicate overfitting. Consider using techniques like dropout, data augmentation, or gathering more diverse training data.

  5. Post-Processing: Look into the Non-Maximum Suppression (NMS) settings. Incorrect NMS parameters can lead to missed detections or low confidence scores.

  6. Model Selection: If you're using a smaller YOLOv5 model (like YOLOv5s), consider using a larger one (like YOLOv5l or YOLOv5x) which might capture more details relevant for medical images.

  7. Batch Size and Learning Rate: If you have not already, experiment with different batch sizes and learning rates, as these can significantly affect model performance.

  8. Review Annotations: Double-check your annotations to ensure they are accurate. Inaccurate or inconsistent annotations can lead to poor model performance.

For more detailed guidance, please refer to our documentation. If the issue persists, feel free to provide more details such as the command you're using for inference, and we can look into it further. Keep up the good work, and remember that the YOLO community and the Ultralytics team are here to support you! 😊👍
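The threshold and NMS points above (items 1 and 5) can be illustrated with a small, self-contained sketch. This is not YOLOv5's internal post-processing code; the box format `(x1, y1, x2, y2, conf)` and the dummy detections are made up for demonstration, but the logic (drop boxes below a confidence threshold, then greedily suppress overlapping boxes) mirrors what the `--conf-thres` and `--iou-thres` flags control:

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def filter_and_nms(dets, conf_thres=0.25, iou_thres=0.45):
    """dets: list of (x1, y1, x2, y2, conf). Returns kept detections."""
    # Drop low-confidence boxes, then sort by confidence, highest first.
    dets = sorted((d for d in dets if d[4] >= conf_thres), key=lambda d: -d[4])
    kept = []
    for d in dets:
        # Keep a box only if it does not overlap an already-kept box too much.
        if all(iou(d, k) < iou_thres for k in kept):
            kept.append(d)
    return kept

raw = [(10, 10, 50, 50, 0.9), (12, 12, 52, 52, 0.6), (100, 100, 140, 140, 0.3)]
print(filter_and_nms(raw, conf_thres=0.25, iou_thres=0.45))
# -> keeps the 0.9 box and the non-overlapping 0.3 box; the 0.6 box is suppressed
```

Note how lowering `conf_thres` to 0.25 surfaces the weak 0.3 detection that a 0.5 threshold would hide, which is exactly the situation described in the question.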

@thorww
Author

thorww commented Jan 16, 2024

@glenn-jocher Thank you very much for your reply. I am trying your suggestions one by one, but I have run into a more serious problem: each image in my dataset contains only one target region, yet at inference time the few boxes with slightly higher confidence are often far from the ground truth. This happens even when I run inference on the same dataset that reaches 0.9 during validation. I have tried several approaches, but none have solved it well. Could you tell me the cause of this problem or how to fix it?

@glenn-jocher
Member

@thorww, it sounds like you're facing a challenging issue. When the model's predictions during inference significantly diverge from the ground truth despite high validation metrics, it could be due to a few reasons:

  1. Domain Shift: If the images used during inference differ in some way from the training/validation images (e.g., different imaging devices, lighting conditions, or preprocessing), the model might struggle to generalize.

  2. Model Calibration: Your model might not be well-calibrated. This means that the confidence scores do not accurately reflect the true likelihood of correct predictions. Research calibration methods like temperature scaling to improve this.

  3. Label Consistency: Ensure that the target regions are consistently labeled across the entire dataset. Inconsistencies can confuse the model.

  4. Inference Settings: Double-check that the inference settings (like image size, augmentation, and confidence thresholds) match those used during training and validation.

  5. Evaluation Metrics: Sometimes, high validation metrics like IoU or mAP might not fully capture the practical performance of the model. Consider using additional metrics that are more aligned with your specific use case.

  6. Analyze Errors: Perform error analysis by reviewing the false positives and false negatives. This can give you insights into what the model is learning and what it's missing.

  7. Ensemble Models: If a single model isn't performing well, consider training multiple models and using ensemble methods to combine their predictions.

  8. Transfer Learning: If you haven't already, use a pretrained model on a similar domain as a starting point for your training.

Remember, diagnosing model performance issues can be complex and may require iterative experimentation. Keep refining your approach based on the insights you gain from each step. If you continue to face difficulties, consider sharing specific examples and details of your training process in the discussions for more targeted advice from the community. Keep pushing forward, and good luck! 👨‍🔬🔍
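The temperature-scaling idea in point 2 can be sketched in a few lines of plain Python. This is an illustrative toy, not a YOLOv5 API: the logits and labels below are invented, and a real calibration pass would optimize over a held-out validation set, but the mechanics (divide logits by a scalar T chosen to minimize validation NLL) are the same:

```python
import math

def softmax(logits, T=1.0):
    """Softmax of a logit vector at temperature T."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def nll(batch_logits, labels, T):
    """Average negative log-likelihood at temperature T."""
    total = 0.0
    for logits, y in zip(batch_logits, labels):
        total -= math.log(softmax(logits, T)[y] + 1e-12)
    return total / len(labels)

def fit_temperature(batch_logits, labels, grid=None):
    """Grid-search the single scalar T that minimizes validation NLL."""
    grid = grid or [0.5 + 0.1 * i for i in range(30)]  # T in [0.5, 3.4]
    return min(grid, key=lambda T: nll(batch_logits, labels, T))

# Overconfident toy model: large logit gaps, but one of three labels is wrong.
logits = [[4.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 4.0, 0.0]]
labels = [0, 1, 1]  # the second prediction is incorrect
T = fit_temperature(logits, labels)
print(T)  # fitted T comes out above 1, softening the overconfident scores
```

A fitted T greater than 1 flattens the softmax, so the reported confidences shrink toward their true hit rate; T less than 1 would sharpen an underconfident model.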

@thorww
Author

thorww commented Jan 17, 2024

@glenn-jocher Thank you. I'll work through these step by step.

@glenn-jocher
Member

@thorww You're welcome! Taking it step by step is a solid approach. Remember, model tuning is often an iterative process, so patience and persistence are key. If you have further questions down the line or need more assistance, don't hesitate to reach out. Best of luck with your model optimization, and I'm confident you'll make great progress! Happy detecting! 😊🚀

Contributor

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added and removed the Stale (scheduled for closing soon) label Feb 17, 2024

@github-actions github-actions bot added the Stale (scheduled for closing soon) label Mar 20, 2024
@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Mar 31, 2024
@fathug

fathug commented Jan 8, 2025

I met a similar problem; in my case the image was converted to a tensor incorrectly before running inference.

For a 3-channel image, the tensor must be filled in planar (channel-first) order: all pixel values of channel 1 in order, then channel 2, then channel 3. If you instead write the three channel values of each pixel coordinate consecutively (interleaved order), the layout is wrong, and this mistake produces symptoms very similar to what you describe.
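The layout difference is easy to see with numpy. This is a generic illustration, not YOLOv5 code: images load from disk as interleaved HWC arrays, and a naive `reshape` to a channel-first shape reinterprets the interleaved bytes instead of actually reordering them, whereas `transpose` produces the correct planar layout:

```python
import numpy as np

h, w = 2, 3
hwc = np.arange(h * w * 3).reshape(h, w, 3)  # interleaved, as loaded from disk

chw = hwc.transpose(2, 0, 1)  # correct: channel-first, planar layout

wrong = hwc.reshape(3, h, w)  # WRONG: relabels interleaved memory as planar

print(np.array_equal(chw, wrong))  # False: the two layouts differ
```

Feeding the `wrong` array to a model silently scrambles every channel, which is why training metrics can look fine (training used the correct loader) while hand-rolled inference produces near-random boxes.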

@pderrenger
Member

Thank you for your input! You are correct that incorrectly preparing the image tensor can lead to issues during inference. YOLOv5 expects images in a specific format: a 3-channel (RGB) image with pixel values normalized to [0, 1]. Ensure the image is properly loaded and preprocessed using libraries like OpenCV, PIL, or PyTorch. If using custom preprocessing, verify it aligns with YOLOv5's expected input format. For more details on inference, refer to the YOLOv5 PyTorch Hub guide.
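A minimal sketch of that preprocessing chain, using numpy only so it is self-contained (a real pipeline would start from `cv2.imread`, which returns a BGR uint8 HWC array; the image here is synthetic):

```python
import numpy as np

def preprocess(bgr_hwc_uint8):
    """BGR HWC uint8 image -> normalized RGB CHW float32 batch of size 1."""
    rgb = bgr_hwc_uint8[:, :, ::-1]        # BGR -> RGB channel swap
    x = rgb.astype(np.float32) / 255.0     # scale pixel values to [0, 1]
    x = x.transpose(2, 0, 1)               # HWC -> CHW (planar layout)
    return x[None, ...]                    # add batch dim: (1, 3, H, W)

img = np.zeros((4, 5, 3), dtype=np.uint8)
img[..., 0] = 255  # pure blue in BGR order
batch = preprocess(img)
print(batch.shape)        # (1, 3, 4, 5)
print(batch[0, 2].max())  # 1.0 - blue correctly landed in the last channel
```

The check at the end is a handy sanity test: if the blue pixels end up in channel 0 instead of channel 2, the BGR-to-RGB swap was skipped somewhere.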
