I modified a part of the code to enable parallel inference with multiple num_batch #1113
base: main
Conversation
Great @zhugelaozei! Can you please fix the formatting by:
@tonyreina hey. I believe it refers to one image. This is also the usual way to use it.

On the topic: I tried the modifications today and it works. Here are my observations:
With regard to the latter, I suspect it's similar to what I observed when running multiple inference instances in separate processes. One reason is likely the data-loading limitation, but beyond that, I suspect it has to do with some low-level locking of GPU operations. I'm not an expert in hardware utilization, but perhaps someone with more experience could shed light on this topic. Either way, it's a welcome addition, and I hope this change is seriously considered.
Hello. I did quite a bit of work on this a while ago. While I did not submit a PR for it, I thought I would post the link here for reference, in the hope that some of the implementation helps get this PR merged.
PR Overview
This PR enables parallel inference within SAHI when using YOLOv11 by adapting the Ultralytics code for batch processing. Key changes include:
- Removing the conversion of the image to PIL in get_prediction.
- Adding a new num_batch parameter to get_sliced_prediction and updating the slice loop to support multiple batches.
- Adjusting type checks for list-based image inputs in both prediction and model inference methods.
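The batching behavior described in the changes above can be sketched roughly as follows. This is a minimal illustration based on the PR description, not SAHI's actual implementation; the function name `group_slices_into_batches` and the representation of slices are assumptions.

```python
def group_slices_into_batches(slices, num_batch):
    """Group image slices into batches of size num_batch so the
    detector can run inference on several slices at once instead
    of one slice per forward pass."""
    return [slices[i : i + num_batch] for i in range(0, len(slices), num_batch)]

# With 10 slices and num_batch=4 we get batches of sizes 4, 4, and 2.
slices = [f"slice_{i}" for i in range(10)]
batches = group_slices_into_batches(slices, num_batch=4)
```

The idea is that each inner list is then passed to the model as a single batched input, which is where the speedup over slice-by-slice inference comes from.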
Reviewed Changes
| File | Description |
|---|---|
| sahi/predict.py | Updated prediction and slicing logic to support parallel inference with num_batch. |
| sahi/prediction.py | Modified image input handling in the ObjectPrediction constructor. |
| sahi/models/ultralytics.py | Adjusted perform_inference logic to handle list inputs and set the original image shape correctly. |
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
Comments suppressed due to low confidence (2)
sahi/prediction.py:168
- [nitpick] Consider using isinstance(image, list) instead of comparing the type directly for improved robustness.
if type(image) is list:
sahi/models/ultralytics.py:114
- [nitpick] Consider using isinstance(image, list) instead of a direct type equality check for improved robustness.
if type(image) == list:
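The nitpick above matters because an exact-type check rejects subclasses of list, while isinstance accepts them. A small illustration (the `ImageList` subclass is hypothetical, purely for demonstration):

```python
class ImageList(list):
    """Hypothetical list subclass, e.g. one that also tracks slice offsets."""
    pass

images = ImageList(["img_a", "img_b"])

# Exact-type comparison fails for subclasses:
exact_check = type(images) is list        # False
# isinstance handles subclasses correctly:
instance_check = isinstance(images, list)  # True
```

Code that only ever sees plain lists behaves the same either way, but isinstance is the more robust and idiomatic choice.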
for _predicion_result in prediction_result.object_prediction_list:
    object_prediction_list.extend(_predicion_result)
[nitpick] Typo detected: '_predicion_result' should be renamed to '_prediction_result' for clarity.

Suggested change:
- for _predicion_result in prediction_result.object_prediction_list:
-     object_prediction_list.extend(_predicion_result)
+ for _prediction_result in prediction_result.object_prediction_list:
+     object_prediction_list.extend(_prediction_result)
Hi! Since I found that SAHI cannot perform parallel inference when using YOLOv11 for sliced inference, I modified part of the code and adapted it to work with the relevant parts of Ultralytics' code. Unfortunately, I have only adapted the Ultralytics part of the code so far. I hope this is helpful to you.