Finetune has no effect #34
Comments
With LoRA, it takes a relatively large number of steps before you see an effect. For faster / more stable results, use full-parameter finetuning.

Roughly how many steps does "a large number" mean before the effect shows up?

@lich99, how were the training parameters set for the LoRA finetune weights the author provides? I can't reproduce the results. I used r=8, lora_alpha=16, dropout=0.1, enable_lora=[True, False, True]; what about the other parameters such as MAX_LENGTH, NUM_EPOCHES, LR?
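For reference, here is a minimal sketch of how those settings might map onto the Hugging Face peft library. This is an assumption, not the author's confirmed recipe (the repo ships its own LoRA code), and the MAX_LENGTH / NUM_EPOCHES / LR values below are placeholder guesses, not reproduced settings:

```python
# A hedged sketch, assuming Hugging Face peft; NOT the author's verified setup.
# enable_lora=[True, False, True] is interpreted here as applying LoRA to the
# Q and V slices of ChatGLM-6B's fused query_key_value projection.
import torch
from transformers import AutoModel
from peft import LoraConfig, get_peft_model, TaskType

model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                      # rank from the question above
    lora_alpha=16,            # scaling from the question above
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # ChatGLM-6B's fused attention projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Placeholder training hyperparameters -- illustrative guesses only:
MAX_LENGTH = 256
NUM_EPOCHES = 2
LR = 1e-4
```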
After finetuning on "who are you" (identity) data, the output is still the original model's answer. Is there anything wrong with how the newly finetuned .pt model is loaded? As follows:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ChatGLM-6B/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("ChatGLM-6B/chatglm-6b", trust_remote_code=True).half().cuda()

# Load the finetuned weights; strict=False silently ignores any keys that
# do not match the model's state dict.
peft_path = "ChatGLM-finetune-LoRA/saved/finetune_test/finetune_test_epoch_2.pt"
model.load_state_dict(torch.load(peft_path), strict=False)
model.eval()
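As a quick sanity check after loading, you could compare the identity answer against the base model's; this is a sketch assuming the weights above loaded correctly (model.chat is ChatGLM-6B's bundled inference method):

```python
# If the LoRA weights took effect, the identity answer should differ from the
# base model's. Note: strict=False above silently drops mismatched keys, so an
# unchanged answer can also mean the weights never matched the model.
response, history = model.chat(tokenizer, "你是谁?", history=[])
print(response)
```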