I used LoRA to fine-tune LLaMA, and during training I had two inputs: an instruction and a text.
After quantization, "Instruction mode with Alpaca" only gives me one input field, so how do I test it? Do I just splice the instruction and text together?
Thank you!
Alpaca uses special formatting to separate the instruction from the data. You can see the prompt templates used by tloen/alpaca-lora. There are two variants: one with just an instruction, and one with an instruction and an input.
Yes, they are spliced together into the token input to the model, both during training and when generating. When generating, you stop at the EOS special token, or when the model emits another ### Instruction: prefix.
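As a minimal sketch of what that splicing looks like (the exact template wording should be checked against the templates in tloen/alpaca-lora; the helper name `build_prompt` here is just illustrative):

```python
# Approximate Alpaca-style templates; verify the wording against
# the template files shipped with tloen/alpaca-lora.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, input_text: str = "") -> str:
    """Splice the instruction (and optional input/text) into one prompt string."""
    if input_text:
        return PROMPT_WITH_INPUT.format(instruction=instruction, input=input_text)
    return PROMPT_NO_INPUT.format(instruction=instruction)

# The single input field at test time would receive the fully spliced prompt:
prompt = build_prompt(
    "Summarize the following paragraph.",
    "LoRA adds low-rank adapter matrices to a frozen base model...",
)
print(prompt)
```

So for testing, you would paste the already-spliced prompt (instruction plus text in the template above) into the single input, and the model's reply is everything up to EOS or the next ### Instruction: marker.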