
Recompute: fix bug with transformer attention mask #34664

Conversation

JZ-LIANG (Contributor) commented on Aug 6, 2021

PR types: Bug fixes

PR changes: OPs

Describe

NOTE: In a Transformer-like network, if the user includes the attention mask in the recompute segment's outputs, PyLayer will force the attention mask's stop_gradient to False, so the number of tensors that require gradients in the backward pass no longer matches the forward outputs. backward_inputs_with_grad is used to avoid this case by passing only the outputs that actually require gradients to the backward computation.
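
For context, a minimal sketch of the pattern this fix guards against. The attention block and tensor shapes below are hypothetical illustrations; only `paddle.distributed.fleet.utils.recompute` is assumed from the real API:

```python
import paddle
from paddle.distributed.fleet.utils import recompute

def attention_block(x, attn_mask):
    # Stand-in for a real self-attention computation; attn_mask masks out
    # padded or future positions and never needs a gradient.
    scores = paddle.matmul(x, x, transpose_y=True) + attn_mask
    weights = paddle.nn.functional.softmax(scores)
    out = paddle.matmul(weights, x)
    # Returning attn_mask from the recomputed segment is what used to break:
    # PyLayer forced its stop_gradient to False, so the count of
    # grad-requiring tensors seen in backward no longer matched.
    return out, attn_mask

x = paddle.randn([2, 8, 16])
x.stop_gradient = False                 # activations need gradients
mask = paddle.zeros([2, 8, 8])          # stop_gradient defaults to True
out, mask_out = recompute(attention_block, x, mask)
out.sum().backward()                    # succeeds once only grad-requiring
                                        # outputs feed the backward pass
```
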

paddle-bot-old (bot) commented on Aug 6, 2021

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

ForFishes (Member) left a comment

LGTM

JZ-LIANG merged commit 0dff82c into PaddlePaddle:develop on Aug 9, 2021