Question about the GPU memory requirements for training #20

Could you share what the GPU memory requirement is for llama-pro training, and how much more it needs compared to LoRA?

Comments
It depends on how many layers you add and on your training configuration. In my experience, 8x A100-40G GPUs can support pre-training with ctx-length=4096. I have also tried raising the LoRA rank to 1024 so that the LoRA trainable parameter count is close to ours, and in that case the memory usage is roughly the same.
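For reference, a minimal sketch of what such a large-rank LoRA comparison might look like, assuming the Hugging Face PEFT library; the rank, alpha, target modules, and model name below are illustrative assumptions, not the exact configuration used in the comment above:

```python
# Hypothetical sketch: a LoRA setup with an unusually large rank (r=1024),
# so the adapter's trainable parameter count approaches that of a few
# full transformer blocks. Assumes the Hugging Face peft library.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # illustrative base model

lora_config = LoraConfig(
    r=1024,                                # very large rank, as in the experiment described above
    lora_alpha=2048,                       # illustrative; often set to roughly 2x the rank
    target_modules=["q_proj", "v_proj"],   # illustrative choice of target layers
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # compare against the block-expansion trainable parameter count
```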
Oh, my understanding is that llama-pro only needs to tune the newly added blocks during pre-training, so the memory requirement should be far lower than that of full-parameter training?
Yes, but if many new layers are added for training, that also brings a large memory footprint. And during training, the original model's parameters still need to be loaded, even though they are not fine-tuned.
I see. Thanks for the answer!
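To make the discussion above concrete, here is a minimal, hypothetical sketch of the setup being described: the full expanded model is loaded into GPU memory, all original parameters are frozen, and only the newly inserted blocks receive gradients and optimizer state. The checkpoint path and the choice of new-block indices are assumptions for illustration, not the repository's actual implementation.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the expanded model: original blocks plus the newly inserted ones.
# Even though only the new blocks are trained, all weights must reside in
# GPU memory for the forward pass (as noted in the discussion above).
model = AutoModelForCausalLM.from_pretrained("path/to/expanded-llama")  # hypothetical checkpoint

# Hypothetical convention: suppose the newly added blocks sit at these layer indices.
new_block_indices = {8, 17, 26, 35}  # purely illustrative

# Freeze everything by default; frozen parameters keep no gradients or
# optimizer state, which is where the memory savings relative to full
# fine-tuning come from.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the newly added blocks.
for idx, layer in enumerate(model.model.layers):
    if idx in new_block_indices:
        for param in layer.parameters():
            param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} / {total:,}")
```

Note that activation memory still scales with the full model depth and the context length, which is consistent with the comment above that adding many new trainable layers also drives memory usage up.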