predict images and video test quantized model #847
Comments
👋 Hello @minstai11, thank you for raising an issue about the Ultralytics HUB 🚀! An Ultralytics engineer will be with you shortly, but in the meantime, you can explore our comprehensive HUB Docs for guidance:
If this is a 🐛 Bug Report, please provide a Minimum Reproducible Example (MRE) to help us debug the issue effectively. For your questions regarding image rescaling and testing quantized models, the more details you can provide about your environment and model configurations, the better we can assist. Thank you for your patience and engagement with the Ultralytics community! 😊 |
I am using a virtual environment and the VS Code tool. I am using a simple script: load the YOLO model (this can be YOLOv5 or YOLOv8 depending on what you're using) and run inference on a video. |
@minstai11 hello! 😊 It looks like you're working with a YOLO model in a virtual environment using VS Code. Let's address your questions:

Image rescaling: the rescaling you observed (e.g., 192x224) might be due to the model's internal processing to maintain the aspect ratio or optimize performance. If your model was trained on 224x224 inputs, ensure your inference input size matches this to get accurate bounding boxes. You can specify the input size with the imgsz argument: results = model.predict(source=r'C:\Git\embedded-ai\output_video_2.mp4', save=True, conf=0.6, imgsz=224)

Testing quantized models: for testing quantized models in TFLite format, ensure you're using the correct input size and preprocessing steps. If bounding boxes are incorrect, verify that the model's input dimensions and preprocessing match those used during training. If you suspect a bug, please ensure you're using the latest version of the Ultralytics packages. If the issue persists, providing a Minimum Reproducible Example (MRE) can help us assist you better. Feel free to reach out with more details if needed. Happy coding! 🚀 |
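On the quantized-TFLite point above: int8/uint8 TFLite outputs usually need an affine dequantization step before the box coordinates make sense. A minimal sketch of that step (the function name and list-based input are illustrative, not an Ultralytics or TensorFlow API; in a real script the scale and zero point are typically read from the interpreter's output details):

```python
def dequantize(q_values, scale, zero_point):
    """Affine dequantization as used by TFLite: real = scale * (q - zero_point)."""
    return [scale * (q - zero_point) for q in q_values]

# Example: with scale=0.5 and zero_point=10, the raw quantized
# values 10 and 12 map back to the real values 0.0 and 1.0.
```

With the real `tf.lite.Interpreter`, the `(scale, zero_point)` pair is usually available under the `'quantization'` key of `get_output_details()`; skipping this step is a common cause of nonsensical bounding boxes from quantized exports.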
Hi, could it be that prediction tries to keep the aspect ratio because I feed a 320x240 video? |
Still I get: video 1/1 (frame 2686/2686) C:\Git\embedded-ai\output_video_2.mp4: 192x224 1 car, 26.0ms |
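The 192x224 in the log above is consistent with aspect-ratio-preserving letterboxing: the 320x240 frame is scaled so its long side fits imgsz=224, then each side is padded up to a multiple of the model stride (32). A rough sketch of that computation (illustrative only, not the actual Ultralytics code):

```python
import math

def letterbox_shape(h, w, imgsz=224, stride=32):
    """Approximate the (height, width) a letterboxing step would produce."""
    # Scale so the longer side fits imgsz, preserving the aspect ratio.
    scale = imgsz / max(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    # Pad each side up to the nearest multiple of the model stride.
    pad_h = math.ceil(new_h / stride) * stride
    pad_w = math.ceil(new_w / stride) * stride
    return pad_h, pad_w

# For a 240x320 frame with imgsz=224 this gives (192, 224),
# matching the shape reported in the log.
```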
Could it be that the preview mode in Ultralytics HUB does not work as expected? I see that the model.predict method works much better than the model preview in the Ultralytics HUB platform. |
Hello @minstai11! 😊 Thanks for reaching out. It's possible that the HUB preview and a local model.predict call behave differently due to differences in preprocessing or inference settings. Key points to consider:
Debugging Steps:
If you continue to experience discrepancies, please provide more details or a Minimum Reproducible Example (MRE) to help us investigate further. Feel free to reach out with any more questions. Happy experimenting! 🚀 |
Hello! 😊 Thanks for bringing this to our attention. Differences in results between the HUB preview and a local model.predict call can occur. Here are a few things to check:
If the issue persists, please provide more details or a Minimum Reproducible Example (MRE) to help us investigate further. Feel free to reach out with any more questions. We're here to help! 🚀 |
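One concrete source of "wrong bounding boxes" when running an exported model yourself is reporting coordinates in the letterboxed input space instead of the original frame. A sketch of mapping a box back to the original image (pure geometry under the usual centered-padding assumption; not an Ultralytics API):

```python
def unletterbox_box(box, orig_h, orig_w, in_h, in_w):
    """Map an (x1, y1, x2, y2) box from letterboxed input coordinates
    back to the original image, assuming centered padding."""
    scale = min(in_h / orig_h, in_w / orig_w)
    pad_x = (in_w - orig_w * scale) / 2
    pad_y = (in_h - orig_h * scale) / 2
    x1, y1, x2, y2 = box
    return ((x1 - pad_x) / scale, (y1 - pad_y) / scale,
            (x2 - pad_x) / scale, (y2 - pad_y) / scale)
```

For the 320x240 video letterboxed to 224x192, a full-frame detection at (0, 12, 224, 180) in input coordinates maps back to roughly (0, 0, 320, 240) in the original frame.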
This is the image size which I have used: results_img = model.predict(source=r'C:\Users\stamin\Desktop\image.jpg', save=True, conf=0.6, imgsz=224) |
Another issue: I have done the same training with YOLOv5 and I got different results with the same data, with different recall graphs and metrics. How could you explain it? The two trainings are 2 weeks apart. |
Hello @minstai11! 😊 Differences in results between YOLOv5 and YOLOv8 can occur due to several factors:
If you continue to see discrepancies, consider running a controlled experiment to isolate the variables. Feel free to share more details if you need further assistance. Happy experimenting! 🚀 |
No, I have done training on the same YOLOv5 model. |
Hello! 😊 Thanks for the clarification. If you've trained the same YOLOv5 model and are seeing different results, here are a few things to consider:
If the issue persists, try running a controlled experiment to isolate variables. Feel free to share more details if you need further assistance. We're here to help! 🚀 |
No, it is the same data. I have used the same Ultralytics HUB, but with a 2-week difference in time and the same data, my newly trained model does not work while the previous training worked. What happened? |
Hello! 😊 Thanks for reaching out and providing the details. It sounds like you're experiencing some unexpected results with model trainings run at different times. Here are a few things to consider:
If the issue persists, try running a controlled experiment with a fixed seed and consistent environment settings. Feel free to share more details if you need further assistance. We're here to help! 🚀 |
Do you understand that I am using Ultralytics HUB and I set the same settings?! |
@minstai11 It could be different versions of the underlying training packages. Some models may not be deterministic, particularly when used with GPUs. GPU CUDA ops are inherently optimized to the point where speed matters more than reproducibility in some default settings, so it may just be this. |
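As a toy illustration of the seeding point (stand-in code, not actual training): a run with a fixed random seed is exactly reproducible, while runs with different seeds, or unseeded GPU kernels, will drift between otherwise identical trainings:

```python
import random

def simulated_metric(seed):
    # Stand-in for a training run whose final "recall" depends on
    # random initialization; a hypothetical example, not real training.
    rng = random.Random(seed)
    return 0.80 + rng.uniform(-0.05, 0.05)

# Same seed, same result; different seeds, (almost surely) different results.
```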
Search before asking
Question
I am interested in why model.predict processes video and prediction in a way that uses 192x224 rescaling for 320x240 input. Why is this done by the method? Our model is trained on a 224x224x3 input. Do we need to rescale the image to get the right bounding boxes?
video 1/1 (frame 2583/2686) C:\output_video_3.mp4: 192x224 1 car, 20.0ms
One more question: how do I test a quantized model in TFLite format? I get wrong bounding boxes.
Additional
No response