
[oneDNN] Refactoring of softmax grad onednn kernel to match common API #32851

Merged: 7 commits merged into PaddlePaddle:develop on May 14, 2021

Conversation

@jczaja (Contributor) commented May 11, 2021

PR types

Function optimization

PR changes

OPs

Describe

This PR modifies the softmax grad oneDNN kernel so that its implementation matches that of the other oneDNN grad kernels. This is needed for bigger changes that will come in the next PRs.
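
For context, the core of the change, as reconstructed from the code snippets quoted in the review below (the exact parameter lists in the merged code may differ), is that SoftmaxMKLDNNHandler no longer takes pre-computed dims, memory formats and axis, but instead receives the execution and device contexts and derives those values itself, the way the other oneDNN grad kernels already do:

// Old interface: the caller pre-computes dims, memory formats and the axis
// and passes them into the handler (trailing parameters omitted here).
SoftmaxMKLDNNHandler(const std::vector<int64_t>& dims,
                     const MKLDNNMemoryFormat fmt,
                     const MKLDNNMemoryFormat diff_fmt, const int& axis, ...);

// New interface: the handler takes the execution and device contexts and
// derives dims, axis and formats internally (e.g. via CanonicalAxis and
// framework::vectorize), matching the common API of the other grad kernels.
SoftmaxMKLDNNHandler(const framework::ExecutionContext& ctx,
                     const MKLDNNDeviceContext& dev_ctx, ...);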

@paddle-bot-old

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

@jczaja added the Intel label on May 11, 2021

@jakpiase (Contributor) left a comment

Also, lines 18-25 and 30-31 are redundant. Both get the Tensor and the MKLDNNDeviceContext, but in two different ways.

auto* dout = ctx.template Input<Tensor>(framework::GradVarName("Out"));
auto* dx =
auto* out_grad = ctx.template Input<Tensor>(framework::GradVarName("Out"));
auto* in_x_grad =
ctx.template Output<framework::Tensor>(framework::GradVarName("X"));
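
As a reading aid only (this is a sketch, not necessarily the code that was merged, and it already uses the unqualified Tensor spelling that the review asks for below), the de-duplicated form would keep a single pair of accessors:

// Fetch the output gradient and the input gradient once each, instead of
// retrieving the same variables under two different names.
auto* out_grad = ctx.template Input<Tensor>(framework::GradVarName("Out"));
auto* in_x_grad =
    ctx.template Output<Tensor>(framework::GradVarName("X"));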

Contributor:

Suggested change
ctx.template Output<framework::Tensor>(framework::GradVarName("X"));
ctx.template Output<Tensor>(framework::GradVarName("X"));

Please, stay consistent with the usage of namespaces

Contributor Author:

ok


auto dims = out_grad->dims(); // input and output share the same shape
const int axis = CanonicalAxis(ctx.Attr<int>("axis"), dims.size());
auto softmax_tz = paddle::framework::vectorize<int64_t>(dims);

Contributor:

Suggested change
auto softmax_tz = paddle::framework::vectorize<int64_t>(dims);
auto softmax_tz = framework::vectorize<int64_t>(dims);

Contributor Author:

ok

SoftmaxMKLDNNHandler(const std::vector<int64_t>& dims,
const MKLDNNMemoryFormat fmt,
const MKLDNNMemoryFormat diff_fmt, const int& axis,
SoftmaxMKLDNNHandler(const paddle::framework::ExecutionContext& ctx,

Contributor:

Suggested change
SoftmaxMKLDNNHandler(const paddle::framework::ExecutionContext& ctx,
SoftmaxMKLDNNHandler(const framework::ExecutionContext& ctx,

Contributor Author:

ok

SoftmaxMKLDNNHandler(const std::vector<int64_t>& dims,
const MKLDNNMemoryFormat fmt,
const MKLDNNMemoryFormat diff_fmt, const int& axis,
SoftmaxMKLDNNHandler(const paddle::framework::ExecutionContext& ctx,
const platform::MKLDNNDeviceContext& dev_ctx,

Contributor:

Suggested change
const platform::MKLDNNDeviceContext& dev_ctx,
const MKLDNNDeviceContext& dev_ctx,

Since you have using "paddle::platform::MKLDNNDeviceContext;" in line 33, you don't need to spell out this namespace here.

Contributor Author:

good catch

const int axis = CanonicalAxis(ctx.Attr<int>("axis"), dims.size());
auto softmax_tz = paddle::framework::vectorize<int64_t>(dims);

auto data_softmax_md = platform::MKLDNNMemDesc(

Contributor:

Suggested change
auto data_softmax_md = platform::MKLDNNMemDesc(
auto data_softmax_md = MKLDNNMemDesc(

Contributor Author:

ok


auto data_softmax_md = platform::MKLDNNMemDesc(
softmax_tz, platform::MKLDNNGetDataType<T>(), out->format());
auto diff_softmax_md = platform::MKLDNNMemDesc(

Contributor:

Suggested change
auto diff_softmax_md = platform::MKLDNNMemDesc(
auto diff_softmax_md = MKLDNNMemDesc(

Contributor Author:

ok

@jakpiase self-requested a review on May 13, 2021, 16:54

@jakpiase (Contributor) left a comment

LGTM

@lidanqing-intel (Contributor) left a comment

LGTM

auto softmax_tz = framework::vectorize<int64_t>(dims);

auto data_softmax_md = MKLDNNMemDesc(
softmax_tz, platform::MKLDNNGetDataType<T>(), out->format());

Contributor:

I have one doubt:
Will out->format and out_grad->format be "NHWC" or "NCHW", i.e. the Paddle format? In that case, if the next op is also an mkldnn op, is a reorder needed?

@jczaja (Contributor Author) commented May 14, 2021

@luotao1 Could you please start your review?

@luotao1 merged commit 479689f into PaddlePaddle:develop on May 14, 2021