[Zero-Dim] support 0-D tensor for reduce/reshape/stack/prelu/expand_v2/gaussion onednn kernels #52185
Conversation
Your PR was submitted successfully. Thank you for your contribution to the open-source project!
❌ The PR was not created using the PR template. You can refer to this Demo.
@zhouwei25 @xinyu-intel @jczaja could you help review? Thanks.
LGTM
@OpTestTool.skip_if(
    True,
    reason="According to Paddle API, None dim means reduce all instead of copy, so just skip this test to avoid potential failure",
)
can you explain more?
According to the doc: https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/fluid/layers/nn/reduce_sum_en.html
When the dim attr is empty, the reduce_sum op should reduce the input tensor to a scalar, instead of just copying the input to the output without changing its shape. I feel this case is invalid, so I just skip it.
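For reference, a minimal sketch of that documented behavior (assuming a Paddle build with the 0-D tensor support this PR series adds): when no dim/axis is given, the reduction collapses the whole input to a 0-D scalar rather than copying it unchanged.

import paddle

x = paddle.rand([2, 3])
y = paddle.sum(x)   # axis omitted (None): reduce over all dims
print(y.shape)      # [] -> a 0-D scalar tensor, not a copy of shape [2, 3]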
okay.
@xinyu-intel hi, in #51364, extra
Have UTs been added for reshape/softmax/log_softmax? Another thing: squeeze/shape also need to support 0-D.
Hi, softmax/log_softmax are covered in #51687. I modified the table in this PR description.
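As an illustrative aside (not the PR's actual unit test), here is a sketch of the 0-D reshape behavior under discussion; it assumes a Paddle release with 0-D tensor support, and whether the oneDNN kernel is actually selected depends on the build and the mkldnn flag.

import paddle

paddle.set_device("cpu")           # oneDNN kernels apply to the CPU path
x = paddle.to_tensor(3.0)          # 0-D tensor, shape []
y = paddle.reshape(x, [1])         # 0-D -> 1-D
z = paddle.reshape(y, [])          # 1-D (one element) -> 0-D
print(x.shape, y.shape, z.shape)   # [] [1] []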
@YangQun1 this branch has merge conflicts that need to be resolved.
Force-pushed from 94d3509 to 8d1b70e.
LGTM
@YangQun1 hi, can this be merged? We will need it. Also, squeeze/shape need to support 0-D.
This PR is ready; squeeze/shape support is WIP, I will open a new PR for it.
@YuanRisheng @zhouwei25 Can you help review and merge this PR?
PR types: New features
PR changes: OPs
Describe: For #51364. Support 0-D tensors for the following oneDNN kernels: