Thanks for your nice work! I would like to know the minimal resources needed to train the overall pipeline of your model. I have 8 NVIDIA 3090 GPUs with 24GB each; is that enough?
Hello! We've only done training on 8 x A100 80G. ControlCap does not have many trainable parameters, so by reducing the batch size and increasing gradient_accumulate_steps, it should be possible to train ControlCap on 3090 24G GPUs.
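To illustrate the trade-off being suggested (this is a generic sketch of gradient accumulation, not ControlCap's actual training code or config keys; all names and batch sizes below are illustrative assumptions):

```python
# Gradient accumulation: instead of stepping the optimizer after every
# mini-batch, gradients are accumulated over `accumulate_steps` batches
# before a single update. Per-step GPU memory shrinks with the per-GPU
# batch size, while the effective (global) batch size is preserved.

def effective_batch_size(per_gpu_batch, num_gpus, accumulate_steps):
    """Global batch size seen by one optimizer update."""
    return per_gpu_batch * num_gpus * accumulate_steps

# e.g. (hypothetical numbers) 8 x A100 at batch 16 with no accumulation ...
a100 = effective_batch_size(per_gpu_batch=16, num_gpus=8, accumulate_steps=1)
# ... matches 8 x 3090 at batch 4 with 4 accumulation steps.
rtx3090 = effective_batch_size(per_gpu_batch=4, num_gpus=8, accumulate_steps=4)
assert a100 == rtx3090 == 128

def train_with_accumulation(batches, grad_fn, accumulate_steps):
    """Toy loop showing the control flow: accumulate scaled gradients over
    `accumulate_steps` batches, then 'step' once. Returns the gradient
    applied at each optimizer step."""
    steps, running = [], 0.0
    for i, batch in enumerate(batches, start=1):
        # scale each batch's contribution, like dividing the loss by N
        running += grad_fn(batch) / accumulate_steps
        if i % accumulate_steps == 0:
            steps.append(running)  # optimizer.step() would happen here
            running = 0.0          # followed by optimizer.zero_grad()
    return steps
```

In a real PyTorch loop the same pattern applies: divide the loss by the accumulation count, call `loss.backward()` every batch, and call `optimizer.step()` / `optimizer.zero_grad()` only every N-th batch.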
Thank you for your quick reply. How do I change gradient_accumulate_steps, in the configs or somewhere else? I couldn't find the specific parameter corresponding to it.