
[Zero-Dim] Support 0-D tensor for some oneDNN unary kernels #51687

Merged

Conversation

Contributor

@YangQun1 commented Mar 15, 2023

PR types

New features

PR changes

OPs

Describe

Partially solves #51364: support 0-D tensors for the following oneDNN elementwise kernels:

| kernel | 0-D usage to be supported | status |
| --- | --- | --- |
| abs | element-wise unary, 0D->0D | DONE |
| elu | element-wise unary, 0D->0D | DONE |
| exp | element-wise unary, 0D->0D | DONE |
| gelu | element-wise unary, 0D->0D | DONE |
| hardswish | element-wise unary, 0D->0D | DONE |
| leaky_relu | element-wise unary, 0D->0D | DONE |
| mish | element-wise unary, 0D->0D | DONE |
| relu | element-wise unary, 0D->0D | DONE |
| relu6 | element-wise unary, 0D->0D | DONE |
| sigmoid | element-wise unary, 0D->0D | DONE |
| sqrt | element-wise unary, 0D->0D | DONE |
| swish | element-wise unary, 0D->0D | DONE |
| tanh | element-wise unary, 0D->0D | DONE |
| round | element-wise unary, 0D->0D | DONE |
| softplus | element-wise unary, 0D->0D | DONE |
| cast | element-wise unary, 0D->0D | DONE |
| clip | element-wise unary, 0D->0D | DONE |
| scale | element-wise unary, 0D->0D | DONE |
| softmax | when the input is 0D, the output is 0D with value 1.0; the grad is 0D with value 0.0 | DONE |
| log_softmax | when the input is 0D, the output is 0D with value 0.0; the grad is 0D with value 0.0 | DONE |
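For illustration, here is a minimal sketch (not code from this PR) of the 0-D behavior summarized in the table, assuming a Paddle build with 0-D tensor support and the dygraph API; the expected values follow the table above, independent of whether the oneDNN path is selected:

```python
import paddle

# Element-wise unary kernels: a 0-D input yields a 0-D output.
x = paddle.to_tensor(-1.5)               # 0-D tensor, shape []
y = paddle.abs(x)
print(y.shape)                           # [] -> still 0-D

# softmax of a 0-D input is a 0-D tensor with value 1.0.
s = paddle.nn.functional.softmax(paddle.to_tensor(2.0))
print(s.shape, float(s))                 # [] 1.0

# log_softmax of a 0-D input is a 0-D tensor with value 0.0.
ls = paddle.nn.functional.log_softmax(paddle.to_tensor(2.0))
print(ls.shape, float(ls))               # [] 0.0
```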

@paddle-bot commented Mar 15, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added contributor External developers status: proposed labels Mar 15, 2023
@CLAassistant commented Mar 15, 2023

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

1 similar comment

@YangQun1 YangQun1 force-pushed the yangqun/eltwise-op-support-0d-tensor branch from d9c4044 to 5d7efbd Compare March 15, 2023 05:32
@YangQun1 YangQun1 marked this pull request as draft March 15, 2023 07:48
@YangQun1 YangQun1 force-pushed the yangqun/eltwise-op-support-0d-tensor branch from d366376 to 1870101 Compare March 15, 2023 14:13
@YangQun1 YangQun1 changed the title [Zero-Dim] Support 0-D tensor for oneDNN elementwise kernels [Zero-Dim] Support 0-D tensor for some oneDNN unary kernels Mar 15, 2023
@jczaja jczaja added the Intel label Mar 17, 2023
@YangQun1 YangQun1 marked this pull request as ready for review March 17, 2023 08:41
@jczaja jczaja requested review from Silv3S, tsocha and jczaja March 17, 2023 10:46
@@ -65,7 +65,7 @@ void TransDataLayoutFromOneDNN(DataLayout in_layout,
   auto& cpu_engine = dev_ctx->GetEngine();
 
   auto in_tz = vectorize<int64_t>(in.dims());
-  auto out_tz = in_tz;
+  auto out_tz = in_tz.size() != 0 ? in_tz : std::vector<int64_t>{1};
Contributor

This fragment of code is so common that I think it's worth making it a small helper function. Later, if oneDNN adds support for 0-D tensors, we will have only one place to update.

Contributor Author

Added, please help review.
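As a side note for readers, the sketch below mirrors the fallback idea discussed above in Python; it is illustrative only and is not the C++ helper added in this PR (the function name is made up). oneDNN currently needs at least a 1-D shape, so the empty dims of a 0-D tensor are mapped to [1]:

```python
# Illustrative Python mirror of the C++ fallback; not the helper from this PR.
def onednn_compatible_dims(dims):
    # Treat a 0-D tensor (empty dims) as the 1-D shape [1] that oneDNN expects.
    return list(dims) if len(dims) != 0 else [1]

print(onednn_compatible_dims([]))      # [1]   (0-D tensor)
print(onednn_compatible_dims([3, 4]))  # [3, 4] (left unchanged)
```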

Comment on lines 35 to 36
int rank = x.dims().size() != 0 ? x.dims().size() : 1;
const int canonical_axis = funcs::CanonicalAxis(axis, rank);
Contributor

As above.
BTW, rank can also be const in this case.

Contributor Author

fixed
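For readers, a small illustrative sketch of the axis handling above, written in Python rather than reusing funcs::CanonicalAxis; clamping the rank of a 0-D input to 1 keeps the usual negative-axis normalization valid:

```python
# Illustrative mirror of the axis canonicalization; not Paddle's C++ code.
def canonical_axis(axis: int, rank: int) -> int:
    # Map a possibly negative axis into [0, rank).
    return axis + rank if axis < 0 else axis

rank = 0                           # a 0-D tensor has rank 0
rank = rank if rank != 0 else 1    # clamp to 1, as in the kernel change above
print(canonical_axis(-1, rank))    # 0
```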

Comment on lines +37 to +41
out = (
    np.apply_along_axis(ref_log_softmax, self.axis, x)
    if len(self.shape) > 0
    else np.array(0.0).astype(self.dtype)
)
Contributor

Suggested change
-    out = (
-        np.apply_along_axis(ref_log_softmax, self.axis, x)
-        if len(self.shape) > 0
-        else np.array(0.0).astype(self.dtype)
-    )
+    out = (
+        np.apply_along_axis(ref_log_softmax, self.axis, x)
+        if len(self.shape) < 0
+        np.array(0.0).astype(self.dtype)
+    )

Contributor Author

Why can't we use Python's ternary operator?

Contributor

Ok, now I see what you did here.
Because of its multi-lining I got confused :)

@@ -59,6 +68,7 @@ def test_check_grad(self):
         self.check_grad(['X'], 'Out')
 
 
+# FIXME(xx) no use_mkldnn attr, does this case run into oneDNN?
Contributor

Please remove this comment. It looks like: "I will never fix it".

Contributor Author

fixed

@@ -77,6 +87,7 @@ def test_check_grad(self):
         self.check_grad(['X'], 'Out')
 
 
+# FIXME(xx) no use_mkldnn attr, does this case run into oneDNN?
Contributor

Please remove this comment. It looks like: "I will never fix it".

Contributor Author

fixed

@@ -122,6 +122,7 @@ def setUp(self):
         self.use_mkldnn = False
         # explicilty use float32 for ROCm, as MIOpen does not yet support float64
         self.dtype = np.float32 if core.is_compiled_with_rocm() else np.float64
+        self.init_kernel_type()
Contributor

Do you know why this line is needed now?

Contributor Author

@xinyu-intel
Contributor

@zhouwei25 Can you please take a review? Thanks:)

Contributor

@zhwesky2010 left a comment

I have no major problems with this, LGTM

self.python_api = F.gelu
self.dtype = np.float32

x = np.random.uniform(-1, 1, [11, 17]).astype(self.dtype)
Contributor

Is this case still 0-D?

Contributor Author

Thank you, it has been fixed now.
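(Editorial illustration, not the exact test code from this PR: a 0-D variant of an OpTest input like the one above can be built by giving NumPy an empty shape.)

```python
import numpy as np

dtype = np.float32
x_2d = np.random.uniform(-1, 1, [11, 17]).astype(dtype)  # original fixed-shape input
x_0d = np.random.uniform(-1, 1, []).astype(dtype)         # 0-D (scalar) input

print(x_2d.shape)  # (11, 17)
print(x_0d.shape)  # ()
print(x_0d.ndim)   # 0
```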

self.python_api = F.mish
self.dtype = np.float32

x = np.random.uniform(0.1, 1, [2, 4, 3, 5]).astype(self.dtype)
Contributor

Is this case still 0-D?

Contributor Author

fixed

self.python_api = F.elu
self.set_alpha()

x = np.random.random((5, 5, 4)).astype("float32")
Contributor

Is this case still 0-D?

Contributor Author

fixed

def setUp(self):
    self.op_type = "exp"
    self.python_api = paddle.exp
    x = np.random.random((5, 5, 4)).astype("float32")
Contributor

Is this case still 0-D?

Contributor Author

fixed


Contributor

@jczaja commented Mar 20, 2023

@YangQun1 We have finished validation of this PR on our internal testing system and it passes, so this PR should be functionally correct according to our workload tests.

Contributor

@jczaja left a comment

LGTM

@jczaja jczaja merged commit 2a3d75b into PaddlePaddle:develop Mar 22, 2023
@paddle-bot commented Mar 22, 2023

Your PR has been merged into the repository. An official integration test will be conducted later. Stay tuned.
