Compute resources: A100 40G.
LoRA fine-tuning with batch_size=1, max_seq_length=1024 just barely runs (each card uses the full 40G of VRAM).
P-Tuning v2 (ptv2) hits OOM no matter how small batch_size and max_seq_length are set (e.g. batch_size=1, max_seq_length=16).
Could you please check whether this is expected?
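For reference, here is a minimal sketch of a LoRA setup comparable to the one described, written against Hugging Face transformers + peft. This is an assumption for illustration, not this repo's own training stack; the model id, rank, and target modules are standard choices for ChatGLM-6B, not values taken from the issue:

```python
# Hedged sketch: assumes transformers + peft, not this repo's trainer.
import torch
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b",
    trust_remote_code=True,      # ChatGLM ships custom modeling code
    torch_dtype=torch.float16,   # fp16 weights to fit a 40G card
)

lora_config = LoraConfig(
    r=8,                         # assumed rank; the issue does not state one
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # ChatGLM's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices train
```

A rough plausibility check: the fp16 weights of a 6B-parameter model alone take about 12 GiB, and with LoRA the optimizer states cover only the adapters, so fitting batch_size=1 at seq length 1024 on a 40G card is in line with the report.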
What about running ptv2 with chatglm-6b-int4?
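Switching to the quantized checkpoint is a one-line change at load time. A sketch, assuming the THUDM/chatglm-6b-int4 weights from the Hugging Face Hub:

```python
# Hedged sketch: load the int4-quantized ChatGLM checkpoint instead.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/chatglm-6b-int4", trust_remote_code=True
)
model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b-int4", trust_remote_code=True
).half().cuda()
```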
I haven't tried that yet, but I'd still prefer to run a non-quantized version if possible. The official ptv2 implementation does run normally (batch_size=4, max_seq_length=256), but its problem is that it can't run on multiple GPUs, which makes it too slow once the dataset grows. Have you made any improvements on that front?
You could run a corresponding experiment on your side to check.
Confirmed: your ptv2 indeed can't run the non-quantized version. Did you debug it on an A100 80G? Roughly how much VRAM does it need?
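One way to answer the VRAM question empirically is to record peak allocation during a single training step. This uses only standard PyTorch memory-stat calls and is independent of this repo:

```python
# Hedged sketch: measure peak VRAM on the current card during a trial step.
import torch

torch.cuda.reset_peak_memory_stats()
# ... run one forward/backward/optimizer step here ...
peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak allocated: {peak_gib:.1f} GiB")
```

Note that max_memory_allocated reports tensor allocations; the driver and CUDA context add some overhead on top, so the number shown by nvidia-smi will be somewhat higher.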