I have a DPO training script very similar to `stack_llama_2/scripts/dpo_llama2.py`. The script works perfectly fine when I run it on a single A100 GPU. However, when I use 2 A100s, the script gets stuck at the 0th iteration of training and does not continue:
0%|          | 0/11264 [00:00<?, ?it/s]
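Not part of the original report, but for context: when an `accelerate` multi-GPU run hangs at step 0 like this, a common first diagnostic is to enable distributed debug logging before launching. These are standard NCCL / PyTorch environment variables, not something the issue author mentions using:

```shell
# Make a step-0 hang observable: NCCL logs which collective each rank is
# waiting in, and torch.distributed reports mismatched collective calls.
export NCCL_DEBUG=INFO
export TORCH_DISTRIBUTED_DEBUG=DETAIL
# Then launch as usual, e.g.:
# accelerate launch dpo_llama2.py
```

With these set, a deadlocked rank usually shows which NCCL collective it is blocked on, which narrows the hang down to model setup, data loading, or a rank-divergent code path.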
My model and reference model are defined like this:
My `accelerate` config is like this:

The versions of `trl`, `peft`, `transformers`, and `accelerate` are:

My issue is sort of a mix of #151, #226, and #958.
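The config block itself did not survive formatting. For reference only (these are illustrative values, not the author's actual settings), a typical 2-GPU config produced by running `accelerate config` on a single machine looks like this:

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
mixed_precision: fp16
num_machines: 1
num_processes: 2   # one process per A100
gpu_ids: all
```

A mismatch between `num_processes` and the number of visible GPUs is a common source of multi-GPU launch problems, so it is worth checking first.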
Is there anyone who could help me out here?
Thanks!