I want to fine-tune Qwen 2.5 3B on an Intel GPU.
I can't do it with the IPEX axolotl.
What's currently the best way to do this?
It seems axolotl began supporting Qwen 2 after v0.5.0. However, ipex-llm only supports axolotl v0.4.0 right now.
That means Qwen 2.5 3B is not supported at the moment. Can you try other models supported in v0.4.0?
If you find a candidate model in v0.4.0, you can use a Linux OS + Docker container and this guide to fine-tune models.
It doesn't have to be axolotl; I want to know what will work right now.
Will torchtune work? Or the transformers Trainer with a newish transformers version?
Torchtune is not supported yet, and the transformers Trainer is not recommended. You can use PEFT for fine-tuning.
You can check "Running LLM Finetuning using IPEX-LLM on Intel GPU" for all supported frameworks and examples.
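For reference, a minimal sketch of the PEFT route might look like the following. This is an untested outline modeled on the ipex-llm GPU fine-tuning examples, not an official recipe: the `ipex_llm.transformers.qlora` module path, the `load_in_low_bit` keyword, the `"xpu"` device string, and the chosen LoRA hyperparameters are all assumptions taken from those examples and may differ across ipex-llm versions.

```python
# Hedged sketch: QLoRA-style fine-tuning with PEFT on an Intel GPU ("xpu")
# via ipex-llm. Assumes ipex-llm[xpu], transformers, and peft are installed
# and an Intel GPU is available; API names may vary by ipex-llm version.
import torch
from transformers import AutoTokenizer
from peft import LoraConfig
from ipex_llm.transformers import AutoModelForCausalLM
from ipex_llm.transformers.qlora import (
    get_peft_model,
    prepare_model_for_kbit_training,
)

model_path = "Qwen/Qwen2.5-3B"  # base model from the question (hypothetical choice here)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_low_bit="nf4",       # 4-bit weights, QLoRA-style
    optimize_model=False,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model = model.to("xpu")          # Intel GPU device in the IPEX/ipex-llm stack

model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj"],  # typical attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, run your own training loop (or follow the training setup used in
# the ipex-llm LLM-Finetuning examples) on the adapter parameters only.
```

Since this depends on Intel GPU hardware and a matching ipex-llm build, treat it as a starting point and compare it against the actual scripts in the LLM-Finetuning examples folder before running.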
Sorry for the late response; closing this now. Thanks @qiyuangong for the advice, I didn't know that folder was in the repo.