
[Benchmark] Optimize bert using fused_ffn and fused_attention #2523

Merged
merged 8 commits into from
Jun 16, 2022

Conversation

FeixLiu
Contributor

@FeixLiu FeixLiu commented Jun 15, 2022

PR types

Others

PR changes

Others

Description

Optimize the BERT benchmark performance using fused_feedforward and fused_attention.

@FeixLiu FeixLiu changed the title from "[WIP] Bert benchmark optimize" to "Bert benchmark optimize" on Jun 16, 2022
@ZHUI ZHUI self-requested a review June 16, 2022 03:11
model_zoo/bert/run_pretrain.py
custom_white_list=[
    "layer_norm", "softmax", "gelu",
    "fused_attention",
    "fused_feedforward"
],
Collaborator

Under AMP, the inputs to fused_attention and fused_feedforward are fp32; does the computation inside the op run in fp16 or fp32?

Contributor Author

fp16.

Collaborator

And without AMP, is the computation inside the operator fp32?

Contributor Author

Correct.
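
For context, a minimal sketch of how this white list plugs into Paddle's AMP autocast; the toy model and training step below are placeholders, not the benchmark's actual code:

import paddle

# Minimal sketch (not the PR's code): a toy layer stands in for BERT, just to
# show how the custom_white_list above feeds paddle.amp.auto_cast.
model = paddle.nn.Linear(16, 16)
opt = paddle.optimizer.AdamW(parameters=model.parameters())
scaler = paddle.amp.GradScaler(init_loss_scaling=2.**15)

x = paddle.randn([4, 16])
with paddle.amp.auto_cast(custom_white_list=[
        "layer_norm", "softmax", "gelu",
        "fused_attention", "fused_feedforward"]):
    # White-listed ops compute in fp16 under AMP even when their inputs
    # arrive as fp32: auto_cast inserts the casts around them.
    loss = model(x).mean()

scaled = scaler.scale(loss)
scaled.backward()
scaler.minimize(opt, scaled)
opt.clear_grad()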

paddlenlp/transformers/bert/modeling.py
self.fuse = fuse
if self.fuse:
    self.encoder = nn.LayerList([
        FusedTransformerEncoderLayer(
Collaborator

If the layer type changes here, won't the parameter names in the state_dict change as well?

Contributor Author

Do you mean resuming fused training from a non-fused checkpoint? That probably isn't supported.
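
A quick way to see the mismatch (a sketch with arbitrary dimensions, assuming a GPU build of Paddle that ships the incubate fused layers):

import paddle
from paddle.nn import TransformerEncoderLayer
from paddle.incubate.nn import FusedTransformerEncoderLayer

# Sketch: compare the parameter names of the plain and fused encoder layers.
plain = TransformerEncoderLayer(d_model=64, nhead=4, dim_feedforward=256)
fused = FusedTransformerEncoderLayer(d_model=64, nhead=4, dim_feedforward=256)

print(sorted(plain.state_dict().keys()))
print(sorted(fused.state_dict().keys()))
# The key sets differ (fused attention packs Q/K/V into a single weight, for
# example), so a non-fused checkpoint will not load into the fused encoder
# without an explicit key/shape mapping step.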

Collaborator

@ZHUI ZHUI left a comment

LGTM

@ZHUI ZHUI merged commit bc84454 into PaddlePaddle:develop Jun 16, 2022
@FeixLiu FeixLiu deleted the bert_benchmark_optimize branch June 16, 2022 06:21
@ZHUI ZHUI changed the title from "Bert benchmark optimize" to "[Benchmark] Optimize bert using fused_ffn and fused_attention" on Jun 16, 2022