How to test with multi GPU and increase test batches? #33
Comments
Evaluation with batch size > 1 is currently unsupported, mainly because for most models it will not produce exactly the same output due to padding, and is therefore not very useful for research. However, the models can already accept inputs with batch size > 1.
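To illustrate the point above, detectron2 models take a list of per-image dicts as input, so "batch size > 1" at inference just means passing a longer list. The sketch below mocks the model with a plain function (hypothetical, to stay self-contained); the list-of-dicts shape is the idea, not a runnable detectron2 call:

```python
# Sketch: a detectron2-style model consumes a list of dicts, one per
# image, and returns one result dict per input. `fake_model` is a
# hypothetical stand-in for the real model.

def fake_model(batched_inputs):
    # One output dict per input dict, mirroring the real interface.
    return [{"instances": f"predictions for image {d['image_id']}"}
            for d in batched_inputs]

# Two images in a single forward pass, i.e. batch size 2:
batched_inputs = [
    {"image_id": 0, "height": 480, "width": 640},  # per-image metadata
    {"image_id": 1, "height": 600, "width": 800},
]
outputs = fake_model(batched_inputs)
assert len(outputs) == len(batched_inputs)
```

Note the padding caveat from the comment above: with images of different sizes in one batch, a real model pads them to a common size, which can change the outputs slightly compared to running each image alone.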
@joeythegod But if you still want to increase the batch size, you can do it by hacking the code here: https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/build.py#L398
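As a rough sketch of what that hack amounts to (an assumption about the linked code, not a copy of it): the test loader there effectively walks the dataset one image at a time, and raising the batch size means grouping dataset indices into larger chunks before each forward pass. In plain Python:

```python
# Sketch (assumption): the linked test-loader code uses an effective
# batch size of 1; the "hack" is to group indices into larger batches.

def chunk(indices, batch_size):
    """Group a flat list of dataset indices into batches of batch_size
    (the last batch may be smaller)."""
    return [indices[i:i + batch_size]
            for i in range(0, len(indices), batch_size)]

# Ten dataset indices, batched four at a time:
batches = chunk(list(range(10)), batch_size=4)
# batches -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

In practice one would pass a larger `batch_size` (or an appropriate batch sampler) to the underlying `torch.utils.data.DataLoader`, keeping in mind the padding caveat noted earlier.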
Closing, as all the models have supported batch size > 1 for inference from the beginning.
Hi, I would like to know how to test with multiple GPUs and how to increase the number of test batches.