
Speedup FP16 Gelu op using fast math and vectorized 8 kernel #38980

Merged 2 commits into PaddlePaddle:develop on Jan 18, 2022

Conversation

@sneaxiy (Collaborator) commented on Jan 15, 2022

PR types

Performance optimization

PR changes

OPs

Describe

Speed up the FP16 Gelu op by: (1) using a vectorized kernel with vector size 8, since the GPU provides a PTX ld instruction that loads 4x32-bit (128-bit) data in a single transaction; (2) using the fast PTX tanh instruction tanh.approx.f32 to speed up the tanhf function. The fast tanh path is enabled only when FLAGS_use_fast_math=1.

(Screenshot attached in the original PR: performance comparison results.)
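Below is a minimal CUDA sketch of the two ideas described above, not the PR's actual kernel: each thread processes 8 FP16 values so the load/store compiles to one 128-bit (4x32-bit) transaction, and tanhf is replaced by inline-PTX tanh.approx.f32 on architectures that support it. The names FastTanh, GeluFwd, and VectorizedGeluFp16 are hypothetical illustration names, not Paddle symbols.

```cuda
// Sketch only: hypothetical names, not Paddle's real gelu kernel.
#include <cuda_fp16.h>

__device__ __forceinline__ float FastTanh(float x) {
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 750
  // tanh.approx.f32 is available on sm_75+ (PTX ISA 7.0).
  float y;
  asm("tanh.approx.f32 %0, %1;" : "=f"(y) : "f"(x));
  return y;
#else
  return tanhf(x);
#endif
}

__device__ __forceinline__ float GeluFwd(float x) {
  // tanh-based GELU: 0.5*x*(1 + tanh(sqrt(2/pi)*(x + 0.044715*x^3)))
  const float kAlpha = 0.7978845608028654f;  // sqrt(2/pi)
  const float kBeta = 0.044715f;
  return 0.5f * x * (1.0f + FastTanh(kAlpha * (x + kBeta * x * x * x)));
}

// Each thread handles VecSize = 8 halves (16 bytes), so the load and store
// below become a single 128-bit global memory transaction, assuming `in`
// and `out` are 16-byte aligned (cudaMalloc guarantees this).
template <int VecSize>
__global__ void VectorizedGeluFp16(const __half* __restrict__ in,
                                   __half* __restrict__ out, int n) {
  int idx = (blockIdx.x * blockDim.x + threadIdx.x) * VecSize;
  if (idx + VecSize <= n) {
    __half buf[VecSize];
    *reinterpret_cast<uint4*>(buf) = *reinterpret_cast<const uint4*>(in + idx);
#pragma unroll
    for (int i = 0; i < VecSize; ++i) {
      buf[i] = __float2half(GeluFwd(__half2float(buf[i])));
    }
    *reinterpret_cast<uint4*>(out + idx) = *reinterpret_cast<const uint4*>(buf);
  } else {
    // Scalar tail for the last, partially filled vector.
    for (int i = idx; i < n; ++i) {
      out[i] = __float2half(GeluFwd(__half2float(in[i])));
    }
  }
}
```

A launch such as `VectorizedGeluFp16<8><<<(n + 8 * 256 - 1) / (8 * 256), 256>>>(in, out, n)` covers n elements with one 128-bit load per thread on the aligned path; a real op would additionally fall back to a scalar kernel when the vectorized path is not applicable.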

@paddle-bot-old commented:

Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.

@limin2021 (Contributor) commented:

Could you provide more detailed performance results in the PR description? (1) Cover a wider range of 56*seq_len values (for example, seq_len_in_batch values in the 30s and 40s also occur in real data). (2) Add a comparison with the JIT gelu kernel from NVIDIA MLPerf 1.1.

@sneaxiy merged commit 8c20d66 into PaddlePaddle:develop on Jan 18, 2022
@sneaxiy deleted the speedup_gelu branch on January 18, 2022 at 02:20
3 participants