
[FEATURE] Finetune LLama-2-70B using 16 * A10 GPUs. #390

Closed
babytdream opened this issue Aug 23, 2023 · 4 comments · Fixed by #288
Labels
type/feature Feature request

Comments

babytdream commented Aug 23, 2023

Hello! Can I use LLM Studio to finetune LLama-2-70B using 16 * A10 (16 * 24 GB) GPUs?
Can you give me a command if possible? Thanks!

babytdream added the type/feature Feature request label on Aug 23, 2023
psinger (Collaborator) commented Aug 23, 2023

We currently do not directly support multi-node. I assume this is a multi-node setup, right?

babytdream (Author) commented

@psinger Yes, it is. Can I finetune it with FSDP + 16-bit LoRA, FSDP + 8-bit QLoRA, or FSDP + 4-bit QLoRA?

psinger (Collaborator) commented Aug 28, 2023

FSDP/DeepSpeed support is currently in progress: #288

And relevant open issues:
#98
#239

Multi-node is currently not on the roadmap.

pascal-pfeiffer (Collaborator) commented

Actually, multi-node should be possible via the CLI (as it uses native torchrun), but I never tried it myself.
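
For reference, a hedged sketch of what such a launch could look like. It assumes `python train.py -Y <config>.yaml` is the LLM Studio CLI entry point and a 2-node x 8-GPU layout (16 * A10 total); the IP address, port, and config path are placeholders, and since multi-node is not officially supported, this is untested here.

```bash
# Sketch only: multi-node launch of the LLM Studio CLI via native torchrun.
# Assumes 2 nodes with 8 x A10 each; node 0 is reachable at 10.0.0.1 (placeholder).
# train.py and -Y follow the repo's documented single-node CLI usage.

# On node 0 (rendezvous master):
torchrun --nnodes=2 --nproc_per_node=8 --node_rank=0 \
         --master_addr=10.0.0.1 --master_port=29500 \
         train.py -Y cfg.yaml

# On node 1, the same command with only the rank changed:
torchrun --nnodes=2 --nproc_per_node=8 --node_rank=1 \
         --master_addr=10.0.0.1 --master_port=29500 \
         train.py -Y cfg.yaml
```

Both nodes would need to see the same config and data paths, and `--master_port` must be reachable between them.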
