I noticed that the pretrained LLaMA model in the code is in .pth format. If I have a pretrained LLaMA model in .bin format, can I use it as the LLaMA base model for training LLaMA-Adapter V2 (i.e., pass it via "--llama_path")?
Similarly, if I have a LLaMA (7B) model fine-tuned with LoRA (which yields a lightweight file like "adapter_model.bin"), can I use that as the LLaMA base model for training LLaMA-Adapter V2 (via "--llama_path")?
Thanks a lot.
I guess your checkpoint is in the transformers format, while our code only supports the original LLaMA format (https://github.com/facebookresearch/llama). So you need to download the original LLaMA weights instead of the transformers-format ones.
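For anyone unsure which layout they have, here is a quick way to check a checkpoint directory before pointing "--llama_path" at it. This is a minimal sketch: the file names below are just the usual conventions of the two releases (original Meta release vs. transformers export), not something the repo's code validates for you.

```python
# Minimal sketch: distinguish the original LLaMA layout from the
# transformers one before passing a directory to --llama_path.
# Assumes the conventional file names of each release; adjust as needed.
from pathlib import Path

def detect_llama_format(ckpt_dir: str) -> str:
    d = Path(ckpt_dir)
    # Original Meta release: params.json + consolidated.NN.pth shards
    # (tokenizer.model usually sits one level up, next to the model dirs)
    if (d / "params.json").exists() and list(d.glob("consolidated.*.pth")):
        return "original"  # usable with --llama_path
    # Transformers export: config.json + pytorch_model*.bin or *.safetensors
    if (d / "config.json").exists() and (
        list(d.glob("pytorch_model*.bin")) or list(d.glob("*.safetensors"))
    ):
        return "transformers"  # needs converting to the original format first
    return "unknown"

print(detect_llama_format("/path/to/llama-7b"))
```

As for the LoRA question: adapter_model.bin contains only the low-rank deltas, not the full base weights, so it cannot serve as "--llama_path" on its own. If the adapter was trained with peft, you could first merge it into its base model (e.g. via peft's merge_and_unload()), but note the merged model would still be in transformers format and would still need converting to the original layout.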