[hybrid performance] Grad fuse for gradient merge under pipeline mode #35004
Conversation
Thanks for your contribution!
python/paddle/fluid/tests/unittests/test_fleet_sharding_meta_optimizer.py
Please also test one more combination: optimize_cast + fp16_allreduce + fuse_grad_merge.
Running it now~~
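For reference, that combination could be wired up roughly as below with fleet's DistributedStrategy; the exact attribute names (fp16_allreduce, fuse_grad_merge, the optimize_cast key in sharding_configs) are assumptions inferred from the options named above, not a verified description of this PR's API surface:

```python
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.pipeline = True
strategy.pipeline_configs = {"micro_batch_size": 2, "accumulate_steps": 4}

# The three options discussed above; treat the attribute names as assumptions.
strategy.fp16_allreduce = True                        # fp16_allreduce
strategy.fuse_grad_merge = True                       # fuse_grad_merge
strategy.sharding = True
strategy.sharding_configs = {"optimize_cast": True}   # optimize_cast
```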
There is still room for optimization: if grads and params are grouped by dtype before fusing, the number of coalesce ops and the number of fused vars can be reduced. This can be done in the next PR.
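A minimal sketch of the dtype-grouping idea mentioned above (plain Python, not the actual Paddle pass; the fuse helper in the usage comment is hypothetical):

```python
from collections import defaultdict

def group_by_dtype(grad_vars):
    """Bucket gradient vars by dtype so each bucket can be coalesced
    into one contiguous fused var, i.e. one coalesce op per dtype
    instead of one per interleaved run of dtypes."""
    buckets = defaultdict(list)
    for grad in grad_vars:
        buckets[grad.dtype].append(grad)
    return buckets

# Hypothetical usage: fuse each dtype bucket separately.
# for dtype, grads in group_by_dtype(all_grads).items():
#     fused_var = coalesce(grads)  # hypothetical fuse helper
```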
LGTM
Revert "… (PaddlePaddle#35116) (PaddlePaddle#35301)" This reverts commit 2931df5.
Revert "[cherry-pick][hybrid performance] optim npu coalesce set constant (PaddlePaddle#35105) (PaddlePaddle#35302)" This reverts commit 12260bd.
Revert "[cherry-pick][hybrid performance] optim the grad fuse for pipeline mode by sorting the grad by dtype (PaddlePaddle#35070) (PaddlePaddle#35300)" This reverts commit e69cc21.
Revert "[cherry-pick][hybrid performance] Grad fuse for gradient merge under pipeline mode (PaddlePaddle#35004) (PaddlePaddle#35299)" This reverts commit e931cd1.
Revert "Add flags to control whether to check Nan value of hccl_allreduce_sum. (PaddlePaddle#35093) (PaddlePaddle#35298)" This reverts commit d4948bc.
Revert "[hybrid] Fix row parallel linear bias (PaddlePaddle#35186) (PaddlePaddle#35297)" This reverts commit b36fb03.
Revert "[hybrid][npu] fix npu clear float status in pipeline (PaddlePaddle#35165) (PaddlePaddle#35295)" This reverts commit 167685e.
Revert "[hybrid npu] fix npu found_finite in hybrid (PaddlePaddle#35134) (PaddlePaddle#35291)" This reverts commit e64105f.
Revert "[cherry-pick][Hybrid Performance] Move the cast op of AMP which cast fp32 param to fp16 param to the optimizer (PaddlePaddle#34965) (PaddlePaddle#35296)" This reverts commit 6fb58ae.
Revert "[cherry-pick] NPU use squared_l2_norm in GradientClipByGlobalNorm (PaddlePaddle#34836) (PaddlePaddle#35289)" This reverts commit 38c27d5.
PR types
Performance optimization
PR changes
Others
Describe
Fused gradient merge under pipeline mode
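Conceptually, instead of keeping one gradient-merge buffer per parameter, the gradients are coalesced into a single contiguous buffer and accumulated there across micro-batches, so the later allreduce touches one fused tensor. A rough numpy sketch of that idea (illustrative only, not the actual pass):

```python
import numpy as np

def fused_gradient_merge(micro_batch_grads):
    """Accumulate per-micro-batch gradients into one fused buffer.

    micro_batch_grads: list of lists of np.ndarray, one inner list per
    micro-batch. All gradients are flattened into a single contiguous
    buffer, so accumulation (and a later allreduce) operates on one
    fused tensor instead of one tensor per parameter.
    """
    total = sum(g.size for g in micro_batch_grads[0])
    fused = np.zeros(total, dtype=micro_batch_grads[0][0].dtype)
    for grads in micro_batch_grads:
        fused += np.concatenate([g.ravel() for g in grads])
    return fused / len(micro_batch_grads)  # averaged merged gradient
```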
The following tests use the Ernie 3.0 model on 8 V100 GPUs, with PP=2, MP=2, and DP=2.
Throughput comparison in tokens/s (increase relative to the baseline)
Loss comparison between the baseline and fp16 allreduce
Loss comparison between the baseline and grad fuse
Loss comparison between the baseline and fp16 allreduce with grad fuse
Loss comparison between the baseline and fp16 allreduce plus optimizer cast with grad fuse
NPU loss diff (by Peng Liu)