Meta AI has since released LLaMA 2. Additionally, new Apache 2.0 licensed weights are being released as part of the Open LLaMA project.
Hi @carmocca, does this support full-parameter supervised finetuning of LLaMA 2 on a custom dataset? Thank you.
Yes, full finetuning is supported via the finetune/full.py script in Lit-GPT, given a Llama 2 model provided via --checkpoint_dir.
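For reference, a minimal invocation might look like the sketch below. Only --checkpoint_dir is confirmed above; the checkpoint path and any other flags are assumptions, so check the Lit-GPT docs for the exact options:

```sh
# Sketch of a full-finetuning run in Lit-GPT. The checkpoint path and the
# --data_dir flag are assumptions; only --checkpoint_dir is confirmed above.
python finetune/full.py \
  --checkpoint_dir checkpoints/meta-llama/Llama-2-7b-hf \
  --data_dir data/alpaca
```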
You can also use a custom dataset, provided that you prepare it in the right format. See the prepare_*.py scripts in that repo for guidance.
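As a rough sketch, a dataset-preparation step might look like the following. The script name, flags, and paths here are assumptions based on the prepare_*.py naming; consult the actual scripts for the expected format:

```sh
# Hypothetical dataset-preparation step; prepare_alpaca.py is used here only
# as an example of the prepare_*.py scripts, and the flags are assumptions.
python scripts/prepare_alpaca.py \
  --destination_path data/my_dataset \
  --checkpoint_dir checkpoints/meta-llama/Llama-2-7b-hf
```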
Is this repo still intended to be supported, given that the lit-gpt repo seems to support more and newer models?
To run LLaMA 2 weights, Open LLaMA weights, or Vicuna weights (among other LLaMA-like checkpoints), check out the Lit-GPT repository.