
confused about the WORLD_SIZE setting on multi-GPU training. #554

Closed
AegeanYan opened this issue Jul 25, 2023 · 2 comments

Comments

@AegeanYan

@AngainorDev I wonder why issues #332 and #182 have different WORLD_SIZE settings.

One is

WORLD_SIZE=1 CUDA_VISIBLE_DEVICES=0,1,2,3 python finetune.py --base_model /datas/alpaca_lora_4bit/text-generation-webui/models/llama-30b-hf --data_path /datas/GPT-4-LLM/data --output_dir ./lora-30B

where WORLD_SIZE is set to 1,

and the other is

OMP_NUM_THREADS=4 WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py

where WORLD_SIZE is set to the number of GPUs.

Is this a torchrun feature, or is one of them wrong? Thanks.

@AngainorDev
Contributor

I think everyone is right; these are two different cases.

One is data parallelism (the full model replicated on each of several GPUs), while the other is model parallelism (one model split across several GPUs).
I have not tried the latter myself, so I can't say more.
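For what it's worth, here is a minimal sketch of how this distinction typically plays out in scripts like alpaca-lora's `finetune.py` (an assumption about the pattern, not a quote of the actual file): the script reads WORLD_SIZE from the environment and uses it to decide between distributed data parallel (DDP) and single-process model parallelism. Note that torchrun sets WORLD_SIZE, RANK, and LOCAL_RANK automatically for each process it spawns, which is why the explicit WORLD_SIZE=2 in the second command is effectively redundant.

```python
import os

# torchrun exports WORLD_SIZE/RANK/LOCAL_RANK for each worker process.
# With a plain `python finetune.py` launch, WORLD_SIZE is unset, so it
# defaults to 1 and the script runs as a single process.
world_size = int(os.environ.get("WORLD_SIZE", 1))
ddp = world_size != 1  # more than one process => data parallelism (DDP)

if ddp:
    # Data parallelism: each process owns exactly one GPU and holds a
    # full copy of the model; gradients are synchronized across ranks.
    device_map = {"": int(os.environ.get("LOCAL_RANK", 0))}
else:
    # Model parallelism: a single process lets the loader shard one
    # model across all visible GPUs (e.g. device_map="auto" in HF).
    device_map = "auto"
```

So WORLD_SIZE=1 with four visible GPUs means "one process, model split across four GPUs", while torchrun with --nproc_per_node=2 means "two processes, one full model copy per GPU".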

@AegeanYan
Author

Thanks for your fast response. Got it.
