[Zero-Dim] Support 0-D tensor for some oneDNN unary kernels #51687
Conversation
Your PR has been submitted successfully. Thank you for your contribution to this open source project!
Force-pushed from d9c4044 to 5d7efbd
Force-pushed from d366376 to 1870101
@@ -65,7 +65,7 @@ void TransDataLayoutFromOneDNN(DataLayout in_layout,
   auto& cpu_engine = dev_ctx->GetEngine();

   auto in_tz = vectorize<int64_t>(in.dims());
-  auto out_tz = in_tz;
+  auto out_tz = in_tz.size() != 0 ? in_tz : std::vector<int64_t>{1};
This fragment of code is so common that I think it is worth making it a small helper function. Later, if oneDNN adds support for 0-D tensors, we will have only one place to update.
added, pls help to review
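A minimal sketch of the kind of helper being discussed (the name `GetOneDNNDims` is an assumption for illustration, not necessarily the name added in the PR): it maps the empty dims of a 0-D tensor to `{1}` in one place, so only this function would need to change if oneDNN later supports 0-D memory descriptors.

```cpp
#include <cstdint>
#include <vector>

// Sketch only: centralize the "0-D tensor -> 1-element 1-D shape" fallback.
// A 0-D Paddle tensor vectorizes to an empty dims vector, while oneDNN memory
// descriptors need at least one dimension.
inline std::vector<int64_t> GetOneDNNDims(const std::vector<int64_t>& dims) {
  return dims.empty() ? std::vector<int64_t>{1} : dims;
}
```

The call site above would then become something like `auto out_tz = GetOneDNNDims(vectorize<int64_t>(in.dims()));`.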
int rank = x.dims().size() != 0 ? x.dims().size() : 1;
const int canonical_axis = funcs::CanonicalAxis(axis, rank);
As above. BTW, `rank` can also be const in this case.
fixed
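A self-contained sketch of the reviewed logic with the suggested `const` applied. It assumes `funcs::CanonicalAxis` simply wraps a negative axis into `[0, rank)`, which is modelled here by a stand-in function; a 0-D tensor is treated as rank 1 before the axis is canonicalized.

```cpp
#include <cassert>

// Stand-in for funcs::CanonicalAxis: map a possibly negative axis into [0, rank).
inline int CanonicalAxis(int axis, int rank) {
  return axis < 0 ? axis + rank : axis;
}

// Sketch of the reviewed lines: rank never changes, so it can be const.
inline int CanonicalAxisForDims(int dims_size, int axis) {
  const int rank = dims_size != 0 ? dims_size : 1;  // 0-D tensor behaves as rank 1
  return CanonicalAxis(axis, rank);
}

int main() {
  assert(CanonicalAxisForDims(0, -1) == 0);  // 0-D input, axis=-1 maps to axis 0
  assert(CanonicalAxisForDims(3, -1) == 2);  // ordinary 3-D case
  return 0;
}
```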
out = (
    np.apply_along_axis(ref_log_softmax, self.axis, x)
    if len(self.shape) > 0
    else np.array(0.0).astype(self.dtype)
)
Suggested change:
-    out = (
-        np.apply_along_axis(ref_log_softmax, self.axis, x)
-        if len(self.shape) > 0
-        else np.array(0.0).astype(self.dtype)
-    )
+    out = (
+        np.apply_along_axis(ref_log_softmax, self.axis, x)
+        if len(self.shape) < 0
+        np.array(0.0).astype(self.dtype)
+    )
Why can't we use Python's ternary operator?
OK, now I see what you did here. Its multi-line formatting confused me :)
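For anyone reading along: the parenthesized block above is a single Python conditional (ternary) expression, just wrapped across several lines by the formatter. A tiny self-contained illustration of the same shape-dependent structure (toy data, not the real test inputs):

```python
import numpy as np

shape = []  # the 0-D case
x = np.ones(shape, dtype=np.float32)

# Same pattern as the test: reduce along an axis when there is one,
# otherwise fall back to a 0-D scalar result.
out = x.sum(axis=0) if len(shape) > 0 else np.array(0.0, dtype=np.float32)
assert out.shape == ()
```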
@@ -59,6 +68,7 @@ def test_check_grad(self):
         self.check_grad(['X'], 'Out')

+    # FIXME(xx) no use_mkldnn attr, does this case run into oneDNN?
Please remove this comment. It reads like "I will never fix it".
fixed
@@ -77,6 +87,7 @@ def test_check_grad(self):
         self.check_grad(['X'], 'Out')

+    # FIXME(xx) no use_mkldnn attr, does this case run into oneDNN?
Please remove this comment. It reads like "I will never fix it".
fixed
@@ -122,6 +122,7 @@ def setUp(self):
         self.use_mkldnn = False
         # explicitly use float32 for ROCm, as MIOpen does not yet support float64
         self.dtype = np.float32 if core.is_compiled_with_rocm() else np.float64
+        self.init_kernel_type()
Do you know why this line is needed now?
I override this method in the derived class to reset the self.use_mkldnn and self.dtype members.
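A minimal sketch of that pattern (the class names here are illustrative, not the exact Paddle test classes): the base test's `setUp` calls an `init_kernel_type()` hook, and the oneDNN-specific subclass overrides it to enable `use_mkldnn` and switch to a dtype the oneDNN kernel supports.

```python
import numpy as np


class SoftmaxOpTestBase:
    """Simplified stand-in for the base op test."""

    def setUp(self):
        self.use_mkldnn = False
        self.dtype = np.float64
        self.init_kernel_type()  # hook for subclasses to adjust the config

    def init_kernel_type(self):
        # Default: keep the reference (non-oneDNN) configuration.
        pass


class SoftmaxOneDNNOpTest(SoftmaxOpTestBase):
    """oneDNN variant: reuses the base test but runs the oneDNN kernel."""

    def init_kernel_type(self):
        self.use_mkldnn = True
        self.dtype = np.float32
```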
@zhouwei25 Could you please review this? Thanks :)
I don't have any major concerns, LGTM.
self.python_api = F.gelu
self.dtype = np.float32

x = np.random.uniform(-1, 1, [11, 17]).astype(self.dtype)
Is this case still 0D?
Thank you, it has been fixed now.
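For reference, a small sketch of what switching such a case to 0-D looks like on the NumPy side (shape and dtype are illustrative): an empty shape produces a 0-D array rather than a matrix.

```python
import numpy as np

# 2-D input, as in the original snippet
x_2d = np.random.uniform(-1, 1, [11, 17]).astype(np.float32)
assert x_2d.shape == (11, 17)

# 0-D input: an empty shape gives an array with ndim == 0
x_0d = np.random.uniform(-1, 1, []).astype(np.float32)
assert x_0d.shape == () and x_0d.ndim == 0
```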
self.python_api = F.mish
self.dtype = np.float32

x = np.random.uniform(0.1, 1, [2, 4, 3, 5]).astype(self.dtype)
Is this case still 0D?
fixed
self.python_api = F.elu
self.set_alpha()

x = np.random.random((5, 5, 4)).astype("float32")
Is this case still 0D?
fixed
def setUp(self):
    self.op_type = "exp"
    self.python_api = paddle.exp
    x = np.random.random((5, 5, 4)).astype("float32")
Is this case still 0D?
fixed
@YangQun1 We have finished validation of this PR on our internal testing system and it passes, so this PR should be functionally correct according to our workload tests.
LGTM
Your PR has been merged into the Paddle repository. Please keep an eye on the follow-up test results.
PR types: New features
PR changes: OPs
Describe:
Partially solves #51364: support 0-D tensors for the following oneDNN elementwise kernels:
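The kernel list itself is not reproduced here. As a hedged usage sketch of what the change enables (whether the oneDNN kernel is actually selected depends on the build and on the use_mkldnn flags used in the tests), a 0-D tensor passed through one of these unary ops should keep its 0-D shape:

```python
import paddle

x = paddle.to_tensor(0.5)  # 0-D tensor on builds with 0-D support, shape []
y = paddle.exp(x)          # exp is one of the unary ops exercised in this PR
print(x.shape, y.shape)    # expected: [] []
```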