The costs of fine-tuning #5
To piggyback on the above question, is there a way to save and load a fine-tuned model for later inference? This could save some cost by avoiding retraining the model from scratch.
Sorry for not following up on this issue.
Yes, you can load a fine-tuned model for later inference; you can save the model name from the output of the fine-tuning run and reuse it.
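As a rough sketch of what that could look like with the current OpenAI Python client (the client usage, model name, and prompt here are illustrative assumptions, not this repo's API; the model name is a placeholder for the one printed at the end of your fine-tuning run):

```python
# Minimal sketch, assuming the OpenAI Python SDK (>= 1.0) and an
# OPENAI_API_KEY in the environment. The model name is a hypothetical
# placeholder for the one saved from the fine-tuning output.
from openai import OpenAI

client = OpenAI()

FINE_TUNED_MODEL = "ft:gpt-3.5-turbo:my-org::abc123"  # placeholder, not a real model

response = client.chat.completions.create(
    model=FINE_TUNED_MODEL,
    messages=[{"role": "user", "content": "What is the band gap of TiO2?"}],
)
print(response.choices[0].message.content)
```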
@kjappelbaum Thanks for your response. Large language models can be so powerful. Perhaps we could develop an open-source model based on LLaMA (Meta's "ChatGPT") in which all researchers can participate. This could be very interesting 🤣
I'll soon upload a revised version of our paper; we also have some results on consumer hardware in there :)
From my experience, I did a fine-tune with 4,000 rows of data (943,748 trained tokens), and the training cost with OpenAI was $0.38.
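For what it's worth, that figure is consistent with a flat per-token training price; a quick back-of-the-envelope check (the $0.0004 per 1K tokens rate is an assumption matching OpenAI's Ada fine-tuning price at the time, which reproduces the reported cost):

```python
# Back-of-the-envelope check of the cost reported above.
trained_tokens = 943_748
price_per_1k_usd = 0.0004  # assumption: USD per 1,000 trained tokens (Ada rate)

cost_usd = trained_tokens / 1_000 * price_per_1k_usd
print(f"Estimated training cost: ${cost_usd:.2f}")  # -> $0.38
```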
Your idea is really interesting, but I'm more worried about the fine-tuning runs costing too much money.