[T5] Enable naive Pipeline Parallelism training for T5 #22535
Merged
What does this PR do?
Similarly to #22329, this PR enables training T5 models in a "Naive Pipeline Parallelism" setup. "Naive Pipeline Parallelism" here simply means spreading the model across multiple GPUs and naively running the forward/backward pass, communicating the activations and gradients between the GPUs. Without this fix, users hit device mismatch issues when training a T5 model that has been loaded across multiple GPUs. The fix is therefore to manually move the `labels` to the same device as `lm_logits`, as sketched below.
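A minimal sketch of the kind of change, assuming the loss computation in `T5ForConditionalGeneration.forward` in `modeling_t5.py` (variable names follow the modeling code; the surrounding lines are paraphrased, not copied from the diff):

```python
from torch.nn import CrossEntropyLoss

# Sketch of the loss computation in T5ForConditionalGeneration.forward
loss = None
if labels is not None:
    loss_fct = CrossEntropyLoss(ignore_index=-100)
    # With the model sharded over several GPUs, the LM head (and thus lm_logits)
    # can sit on a different device than the labels the user passed in, so move
    # the labels over before computing the loss.
    labels = labels.to(lm_logits.device)
    loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
```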
A simple snippet to reproduce the behaviour is given below (this needs to be run on a multi-GPU env):
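The snippet below is a reconstruction; the checkpoint, prompt, and `device_map="balanced"` choice are illustrative and not necessarily the exact ones from the original report:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "t5-small"  # any T5 checkpoint; "balanced" shards it across all visible GPUs
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="balanced")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Inputs and labels are placed on the first GPU; the LM head may end up elsewhere.
inputs = tokenizer("Translate English to German: How old are you?", return_tensors="pt").to(0)
labels = tokenizer("Wie alt bist du?", return_tensors="pt").input_ids.to(0)

# Without the fix, computing the loss raises a device-mismatch error.
loss = model(**inputs, labels=labels).loss
loss.backward()
```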
Error trace: the forward pass fails with PyTorch's "Expected all tensors to be on the same device" `RuntimeError`, raised from the cross-entropy loss computation.
cc @sgugger
Related issues:
huggingface/peft#242
huggingface/peft#205