
What GPU was used to finetune the 2D model? #26

Open
krishnaadithya opened this issue Jun 14, 2023 · 5 comments

Comments

@krishnaadithya

I tried finetuning the model on a 24 GB VRAM GPU, but it runs out of memory and only trains with a batch size of 2.

@anlarro
anlarro commented Jul 12, 2023

Hi. Any update on this? I experience the same issue: training with a Tesla V100 32GB only allows me to use a batch size of 3.

@rocketche
rocketche commented Feb 29, 2024

> I tried finetuning the model on a 24 GB VRAM GPU, but it runs out of memory and only trains with a batch size of 2.

You can try vit_base instead of vit_large or vit_huge. Please let me know if it helps you. @krishnaadithya @anlarro

@anlarro
anlarro commented Feb 29, 2024

> You can try vit_base instead of vit_large or vit_huge. Please let me know if it helps you. @krishnaadithya @anlarro

Using vit_b with a Tesla V100 32GB only allows me to use a batch size of 3.
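When VRAM caps the per-step batch at 2-3 samples, gradient accumulation is a common workaround: run several small forward/backward passes and apply one optimizer update over their summed gradients, simulating a larger effective batch. The sketch below is generic PyTorch and not the Medical-SAM-Adapter training loop; the model, optimizer, and batch sizes are placeholders for illustration.

```python
# Hedged sketch: gradient accumulation to simulate a larger effective batch
# under tight GPU memory. Generic PyTorch; not this repository's code.
import torch
import torch.nn as nn

model = nn.Linear(16, 1)                       # stand-in for the real model
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

micro_batch = 2                                # what fits in VRAM per step
accum_steps = 8                                # effective batch = 2 * 8 = 16

# Placeholder data: accum_steps micro-batches of (input, target) pairs.
data = [(torch.randn(micro_batch, 16), torch.randn(micro_batch, 1))
        for _ in range(accum_steps)]

updates = 0
opt.zero_grad()
for step, (x, y) in enumerate(data, start=1):
    loss = loss_fn(model(x), y) / accum_steps  # scale so gradients average
    loss.backward()                            # accumulates into .grad
    if step % accum_steps == 0:
        opt.step()                             # one update per 16 samples
        opt.zero_grad()
        updates += 1
```

Mixed-precision training (`torch.cuda.amp`) is another option that often roughly halves activation memory and can be combined with accumulation; both trade step latency for a larger effective batch rather than reducing model size.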

@WuJunde
Collaborator
WuJunde commented Feb 29, 2024

We have added MobileSAM and EfficientSAM as optional backbones; maybe you can give them a try.

@WuJunde
Collaborator
WuJunde commented Feb 29, 2024

@anlarro

  • 24-01-11. Added a detailed guide on utilizing the Efficient Med-SAM-Adapter, complete with a comparison of performance and speed. You can find this resource in guidance/efficient_sam.ipynb. Credit: @shinning0821
  • 24-01-14. We've just launched our first official version, v0.1.0-alpha 🥳. This release includes support for MobileSAM, which can be activated by setting -net mobile_sam. Additionally, you now have the flexibility to use ViT, Tiny ViT, and Efficient ViT as encoders. Check the details here. Credit: @shinning0821
  • 24-01-20. Added a guide on utilizing MobileSAM in Med-SAM-Adapter, with a comparison of performance and speed. You can find it in guidance/mobile_sam.ipynb. Credit: @shinning0821
