
[cherry-pick] Add fused_attention_op: add impl wrappers. #36673

Conversation

limin2021
Contributor

PR types

Function optimization

PR changes

OPs

Describe

  1. Purpose: the goal of this PR is to improve the computational performance of the attention module.
    To reduce the framework's op-scheduling overhead, this PR implements the attention module by hand at the C++ level and exposes it as a single large attention op.
    To reduce memory-access overhead, this PR applies two optimizations:
    (1) When computing q, k, and v, the input X is shared, so the gemm, transpose, and bias add there are reduced from three calls to one;
    (2) kernel fusion is used so that data is passed between different CUDA kernels through registers.

  2. The computation implemented by fused_attention_op:
    [figure: pseudocode of the fused attention computation; a sketch of this flow follows the list below]

  3. Differences between fused_attention_op and Paddle's existing MultiHeadAttention layer:
    (1) It covers a broader range of the computation; see the pseudocode above.
    (2) The storage format of the q, k, v weights differs.
    Existing layer: stored in three weight tensors, WQ, WK, WV.
    This PR: stored in a single weight tensor, qkv_weight.
    How qkv_weight is obtained from WQ, WK, WV:
    [figure: construction of qkv_weight from WQ, WK, WV; a packing sketch follows this list]
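
Since the pseudocode figure is not reproduced here, below is a minimal NumPy sketch of the fused computation as described above: optional pre-layer-norm, a single QKV GEMM plus bias, the fmha (scaled dot-product attention) stage, the output projection, and the residual connection. Dropout is omitted, and the helper names and exact stage order are assumptions for illustration, not the op's definitive implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def fused_attention_sketch(x, qkv_weight, qkv_bias, out_weight, out_bias,
                           num_heads, pre_layer_norm=True):
    """Hypothetical reference flow of the fused op (dropout omitted)."""
    batch, seq_len, embed_dim = x.shape
    head_dim = embed_dim // num_heads

    ln = layer_norm(x) if pre_layer_norm else x

    # One GEMM produces q, k and v together; qkv_weight is assumed to be
    # packed as [3, num_heads, head_dim, embed_dim] (see the sketch below).
    qkv = ln @ qkv_weight.reshape(3 * embed_dim, embed_dim).T + qkv_bias
    qkv = qkv.reshape(batch, seq_len, 3, num_heads, head_dim)
    q, k, v = qkv.transpose(2, 0, 3, 1, 4)  # each [batch, heads, seq, head_dim]

    # fmha stage: scaled dot-product attention per head.
    attn = softmax(q @ k.transpose(0, 1, 3, 2) / np.sqrt(head_dim))
    ctx = (attn @ v).transpose(0, 2, 1, 3).reshape(batch, seq_len, embed_dim)

    # Output projection and residual connection back onto the op input.
    out = ctx @ out_weight + out_bias + x
    return out if pre_layer_norm else layer_norm(out)
```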
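
To make the weight layout concrete, here is a hedged sketch of one way to pack WQ, WK, WV into a single qkv_weight so that the three GEMMs over the shared input X collapse into one; the [3, num_heads, head_dim, embed_dim] layout is an assumption for this sketch, and the op's actual ordering may differ.

```python
import numpy as np

batch, seq_len, embed_dim, num_heads = 2, 4, 8, 2  # illustrative sizes
head_dim = embed_dim // num_heads

rng = np.random.default_rng(0)
X = rng.standard_normal((batch, seq_len, embed_dim))
WQ, WK, WV = (rng.standard_normal((embed_dim, embed_dim)) for _ in range(3))

# Separate path: three GEMMs over the same input X.
q_ref, k_ref, v_ref = X @ WQ, X @ WK, X @ WV

# Fused path: pack the transposed weights into one tensor, assumed layout
# [3, num_heads, head_dim, embed_dim], then issue a single GEMM.
qkv_weight = np.stack([WQ, WK, WV]).transpose(0, 2, 1)  # [3, out, in]
qkv_weight = qkv_weight.reshape(3, num_heads, head_dim, embed_dim)

qkv = X @ qkv_weight.reshape(3 * embed_dim, embed_dim).T  # one GEMM
qkv = qkv.reshape(batch, seq_len, 3, num_heads, head_dim)

q = qkv[:, :, 0].reshape(batch, seq_len, embed_dim)
k = qkv[:, :, 1].reshape(batch, seq_len, embed_dim)
v = qkv[:, :, 2].reshape(batch, seq_len, embed_dim)
assert np.allclose(q, q_ref) and np.allclose(k, k_ref) and np.allclose(v, v_ref)
```

Packing the weights this way also means the q/k/v bias add and the subsequent transpose can each run once over the fused tensor instead of three times, which is the "three calls reduced to one" claim in optimization (1).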

This PR is the first of the new fused_attention PRs; it mainly adds wrappers around several operator implementations to make the code easier to reuse:
1. Add impl wrappers for the gemm and fmha parts in fused_attention_op.
2. Fix bugs in layer_norm and attn_bias_add.cu.h.
3. Fix bugs in elementwise_op_impl.cu.h for the ternary elementwise_add impl.

@paddle-bot-old

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

@lanxianghit merged commit 8c0bacd into PaddlePaddle:release/2.2 on Oct 25, 2021