What GPU was used to finetune the 2D model? #26
Comments
Hi. Any update on this? I'm experiencing the same issue: training with a Tesla V100 32GB only allows me to use a batch size of 3.
You can try vit_base instead of vit_large or vit_huge. Please let me know if it helps. @krishnaadithya @anlarro
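For reference, switching to the smaller ViT-B image encoder usually looks something like the sketch below, using the upstream `segment-anything` model registry. Whether this repo loads the backbone this way, and the exact checkpoint path, are assumptions; adapt to the repo's own training flags.

```python
# Minimal sketch (assumes the upstream segment-anything package and the
# official ViT-B checkpoint file; this repo's own config may differ).
import torch
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
sam.to("cuda")

# Rough parameter count of the image encoder, useful for comparing the
# memory footprint of vit_b against vit_l / vit_h.
n_params = sum(p.numel() for p in sam.image_encoder.parameters())
print(f"image encoder params: {n_params / 1e6:.1f}M")
```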
Using vit_b with a Tesla V100 32GB only allows me to use a batch size of 3.
We have added MobileSAM and EfficientSAM as optional backbones; maybe you can give them a try.
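As a rough illustration, MobileSAM exposes the same registry-style API as SAM but with a tiny ViT image encoder ("vit_t"), which cuts encoder memory substantially. The checkpoint filename and how this repo wires MobileSAM into its trainer are assumptions; EfficientSAM ships its own builder functions and is not shown here.

```python
# Minimal sketch, assuming the mobile_sam package and its published checkpoint;
# integration with this repo's training loop is not shown.
import torch
from mobile_sam import sam_model_registry

mobile_sam = sam_model_registry["vit_t"](checkpoint="mobile_sam.pt")
mobile_sam.to("cuda")
mobile_sam.train()
```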
I tried finetuning the model on a 24 GB VRAM GPU, but it runs out of memory and only trains with batch size 2.
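If you are stuck at a per-step batch of 2-3, a common generic workaround (not specific to this repo) is gradient accumulation combined with mixed precision, so the effective batch size is larger without extra memory. The sketch below is hypothetical: `model`, `train_loader`, and `compute_loss` stand in for this repo's actual objects.

```python
# Hypothetical sketch: small per-step batch, gradients accumulated over
# several steps, AMP to reduce activation memory.
import torch

accum_steps = 4  # effective batch = loader batch size * accum_steps
scaler = torch.cuda.amp.GradScaler()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

optimizer.zero_grad()
for step, batch in enumerate(train_loader):
    with torch.cuda.amp.autocast():
        loss = compute_loss(model, batch) / accum_steps
    scaler.scale(loss).backward()
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
```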