[AutoParallel] Support pipeline parallelism backward non-computation clip. #58609
Merged
GhostScreaming merged 27 commits into PaddlePaddle:develop from GhostScreaming:support_reshard_backward on Nov 2, 2023
+325 −101
Conversation
chenwhql approved these changes on Nov 2, 2023
zeroRains pushed a commit to zeroRains/Paddle that referenced this pull request on Nov 8, 2023

[AutoParallel] Support pipeline parallelism backward non-computation clip. (PaddlePaddle#58609)
* [AutoParallel] Support paddle.distributed.reshard construct GradNode, which is needed for pipeline parallel.
* Fix problem of CI, and fix pp testcase as review comments advising.
* Fix including files problem.
* Polish paddle.distributed.reshard implementation according to review comments.
* Fix some problems.
* Polish code.
* Fix problem of failed testcase.
* Move reshard function to tensor_utils.h, as files in phi/core is not allowed to include files in phi/api.
* Add forgetting file.
* Fix some compilation problem.
* Remove useless PADDLE_WITH_DISTRIBUTE conditional compilation.
* Remove useless PADDLE_WITH_DISTRIBUTE conditional compilation.
* Fix problem of WITH_PYTHON=OFF compilation option.
* Fix bug of conditional compilation.
* [AutoParallel] Support pipeline parallel backward. Both pp single strategy and dp-mp-pp hybrid strategy are verified. As CI machine only has 2 cards and dp-mp-pp strategy needs 9 GPU cards, such case will be added in testcase later.
* Polish pipeline parallel backward implementation.
* Remove useless modification.
* Add MLP dp-mp-pp hybrid strategy testcase, it can't be run on CI Machine now as it needs 8 gpus.
* Remove useless modification.
* Fix problem of Tensor double free and polish code.
* Fix problem of ReshardOutputPartialAxisToReplicated.
* Revert "Revert "[AutoParallel] Support pipeline parallelism backward non-computation clip. (PaddlePaddle#58449)" (PaddlePaddle#58601)". This reverts commit 79e24ec.
danleifeng pushed a commit to danleifeng/Paddle that referenced this pull request on Nov 14, 2023

[AutoParallel] Support pipeline parallelism backward non-computation clip. (PaddlePaddle#58609) (same commit message as above)
PR types
Bug fixes
PR changes
Others
Description
Pcard-73145
Fixes the compilation conflict between PR 58449 and PR 58506.
Supports clipping the backward computation of non-computation ranks in pipeline parallelism. The forward pass is covered in PR 58126, and building the forward and backward graphs with paddle::distributed::reshard is covered in PR 58238. The key change is the special handling of uninitialized Tensors when constructing the backward graph:
* In the IsRunAutoParallel() case, the FillZeroForEmptyGradInput handling is skipped.
* SetGradInMeta special-cases the PP situation.
* GradTensorHolder::add special-cases the PP situation, to prevent edges between backward nodes from being left unconnected.
A usage sketch of the behavior this enables is shown below.
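A minimal dynamic-graph sketch of the user-visible effect, not the implementation itself: it assumes the Paddle 2.6+ auto-parallel API (dist.ProcessMesh, dist.shard_tensor, dist.reshard, dist.Replicate) and two GPUs launched via paddle.distributed.launch; the mesh layout and tensor shapes are illustrative only. The point is that reshard across pipeline stages records a GradNode, so backward propagates through the stage boundary, and ranks that hold no computation for a tensor have that part of the backward graph clipped rather than receiving zero-filled gradients.

```python
# Run with: python -m paddle.distributed.launch --gpus 0,1 pp_reshard_demo.py
import paddle
import paddle.distributed as dist

# Two pipeline stages, one rank each (illustrative mesh layout).
mesh0 = dist.ProcessMesh([0], dim_names=["pp"])
mesh1 = dist.ProcessMesh([1], dim_names=["pp"])

# Stage 0 computation, replicated on mesh0.
x = dist.shard_tensor(paddle.randn([4, 8]), mesh0, [dist.Replicate()])
x.stop_gradient = False
y0 = paddle.nn.functional.relu(x)

# reshard moves the activation to stage 1 and, with this PR, also builds a
# GradNode, so gradients flow back across the pipeline-stage boundary while
# non-computation ranks skip the clipped part of the backward graph.
y1 = dist.reshard(y0, mesh1, [dist.Replicate()])
loss = y1.sum()
loss.backward()
```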