@AngainorDev I wonder why issues #332 and #182 have different WORLD_SIZE settings.

One is

```
WORLD_SIZE=1 CUDA_VISIBLE_DEVICES=0,1,2,3 python finetune.py --base_model /datas/alpaca_lora_4bit/text-generation-webui/models/llama-30b-hf --data_path /datas/GPT-4-LLM/data --output_dir ./lora-30B
```

where WORLD_SIZE is set to 1, and the other is

```
OMP_NUM_THREADS=4 WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py
```

where WORLD_SIZE is set to the number of GPUs.

Is this a torchrun feature, or is one of them wrong? Thanks!
I think everyone is right; these are two different cases.
One is data parallelism (the full model on each of several GPUs), while the other is model parallelism (one model split across several GPUs).
I haven't tried the latter myself, so I can't say more.
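As a rough illustration of how the two launches end up in different code paths: finetune scripts in the alpaca-lora family typically key off WORLD_SIZE to choose between DDP (full model per GPU) and a sharded device_map (one model spread across GPUs). A minimal sketch, assuming a transformers/accelerate-style loader; the exact names in this repo's 4-bit finetune.py may differ:

```python
# Minimal sketch of the usual alpaca-lora-style pattern; not the exact code of this repo.
import os
import torch
from transformers import LlamaForCausalLM

base_model = "/datas/alpaca_lora_4bit/text-generation-webui/models/llama-30b-hf"  # path from the issue

world_size = int(os.environ.get("WORLD_SIZE", 1))
ddp = world_size != 1  # torchrun exports WORLD_SIZE=nproc_per_node, so this is True under torchrun

if ddp:
    # Data parallelism: each worker process loads the full model onto its own
    # GPU (LOCAL_RANK); gradients are synchronized across processes by DDP.
    device_map = {"": int(os.environ.get("LOCAL_RANK", 0))}
else:
    # Model parallelism: a single process lets accelerate shard the layers
    # across all GPUs listed in CUDA_VISIBLE_DEVICES.
    device_map = "auto"

model = LlamaForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map=device_map,
)
```

So with the first command only one process exists and the 30B model is split across GPUs 0-3, while with the second torchrun starts two processes and each one must fit the whole model on its own GPU.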