
The minimal computational resources? #8

Open
VoyageWang opened this issue Aug 27, 2024 · 2 comments
@VoyageWang

Hi there!

Thanks for your nice work! I would like to know the minimal resources needed to train the full pipeline of your model. I have 8 NVIDIA RTX 3090 GPUs with 24 GB each; is that enough?

@callsys
Owner

callsys commented Aug 27, 2024

Hello! We have only trained on 8 x A100 80G GPUs. ControlCap does not have many trainable parameters, so by reducing the batch size and increasing gradient_accumulate_steps, it should be possible to train ControlCap on 3090 24G cards.
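
In case it helps, here is a minimal, self-contained sketch of the trade described above: shrinking the per-step batch while accumulating gradients over several micro-batches, so the optimizer still sees the gradient of the larger effective batch. The model, data, and `accumulate_steps` value below are toy placeholders, not ControlCap's actual components.

```python
# Minimal sketch of gradient accumulation in PyTorch.
# Everything here is a toy placeholder, not ControlCap code.
import torch
import torch.nn as nn

model = nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
# Small micro-batches (size 2) standing in for the reduced per-GPU batch.
loader = [(torch.randn(2, 16), torch.randn(2, 1)) for _ in range(8)]

accumulate_steps = 4  # effective batch = micro-batch (2) x 4 = 8

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = nn.functional.mse_loss(model(x), y)
    # Scale the loss so the accumulated gradient averages over the
    # effective batch instead of summing across micro-batches.
    (loss / accumulate_steps).backward()
    # Update the weights only once every accumulate_steps micro-batches.
    if (step + 1) % accumulate_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

The memory saving comes from holding activations for only one small micro-batch at a time, while the gradient (and hence the update) still reflects the full effective batch.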

@VoyageWang
Author

> Hello! We have only trained on 8 x A100 80G GPUs. ControlCap does not have many trainable parameters, so by reducing the batch size and increasing gradient_accumulate_steps, it should be possible to train ControlCap on 3090 24G cards.

Thank you for the quick reply. I would like to know how to change gradient_accumulate_steps in the configs or elsewhere; I could not find a parameter corresponding to it.
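
For what it's worth, training stacks in this space (e.g., LAVIS) usually expose gradient accumulation as a key in the run section of the training YAML. The excerpt below is an assumption about the key names following the LAVIS convention, not a verified ControlCap option; the repo's configs would need to be checked.

```yaml
# Hypothetical training-config excerpt; key names follow the LAVIS
# run-config convention and are assumptions, not verified ControlCap options.
run:
  batch_size_train: 4   # smaller per-GPU batch to fit 24 GB
  accum_grad_iters: 4   # accumulate gradients over 4 micro-batches
```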
