Hello! Can I use LLM Studio to fine-tune Llama-2-70B using 16 × A10 (16 × 24 GB) GPUs? Can you give me a command if possible? Thanks!
We currently do not directly support multi-node. I assume this is a multi-node setup, right?
@psinger Yes, it is. Can I fine-tune it with FSDP + 16-bit LoRA, FSDP + 8-bit QLoRA, or FSDP + 4-bit QLoRA?
FSDP/DeepSpeed support is ongoing work: #288
Related open issues: #98, #239
Multi-node is currently not on the roadmap.
Actually, multi-node should be possible via the CLI (which uses native torchrun), but I have never tried it myself.
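For reference, a multi-node torchrun launch generally looks like the sketch below. This is only an illustration, not a verified LLM Studio command: it assumes two nodes with 8 GPUs each, and `train.py --config my_config.yaml` stands in for whatever the actual CLI entry point and arguments are.

```bash
# Hypothetical sketch of a multi-node torchrun launch.
# Assumes 2 nodes x 8 GPUs; train.py and its arguments are
# placeholders for the real LLM Studio CLI entry point.

# On node 0 (the rendezvous host):
torchrun \
  --nnodes=2 \
  --nproc_per_node=8 \
  --node_rank=0 \
  --rdzv_id=llama2-70b-finetune \
  --rdzv_backend=c10d \
  --rdzv_endpoint=<NODE0_IP>:29500 \
  train.py --config my_config.yaml

# On node 1, run the same command with --node_rank=1.
```

Note that whether a 70B model fits on 16 × 24 GB GPUs will also depend on the sharding strategy and quantization used, so treat this only as the launch-mechanics part of the answer.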