This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Add reshape op supported by MKL-DNN #12980

Merged: 13 commits, Dec 13, 2018

Conversation

huangzhiyuan
Contributor

@huangzhiyuan huangzhiyuan commented Oct 26, 2018

Description

This PR intends to add a reshape op supported by MKL-DNN. It helps reduce unnecessary reorder operations after an MKL-DNN operation. @pengzhao-intel @ZhennanQin

Test network:

import mxnet as mx

data = mx.symbol.Variable('data')
conv1 = mx.symbol.Convolution(data=data, num_filter=16384, kernel=(1, 1), pad=(0, 0), stride=(1, 1))
res = mx.symbol.reshape(data=conv1, shape=(16384,4,16,16))
trans = mx.symbol.transpose(data=res)
res1 = mx.symbol.reshape(data=trans, shape=(1, 1024, 1, 16384))
conv2 = mx.symbol.Convolution(data=res1, num_filter=16384, kernel=(1, 1), pad=(0,0), stride=(1,1))
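To see why a native MKL-DNN reshape helps: for a plain, contiguous NCHW buffer a reshape is metadata-only, while data left in MKL-DNN's blocked nChw16c layout must first be reordered (copied) back to plain layout, which is the overhead this PR targets. A minimal NumPy sketch of the two cases (illustrative only, not the MXNet/MKL-DNN implementation):

```python
import numpy as np

# Plain NCHW tensor: reshape is metadata-only (a view, no copy).
x = np.arange(4 * 32 * 2 * 2, dtype=np.float32).reshape(4, 32, 2, 2)
y = x.reshape(32, 4, 2, 2)
print(np.shares_memory(x, y))   # True: no data movement

# Emulate MKL-DNN's blocked nChw16c layout: channels split into
# blocks of 16, giving shape (N, C//16, H, W, 16).
blocked = x.reshape(4, 2, 16, 2, 2).transpose(0, 1, 3, 4, 2)

# Reshaping the blocked buffer cannot be metadata-only: the data is
# not contiguous in NCHW order, so a copy (a "reorder") is forced.
z = blocked.reshape(4, 32, 2, 2)
print(np.shares_memory(x, z))   # False: data was copied
```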

Original verbose log:

mkldnn_verbose,exec,convolution,jit:avx512_common,forward_inference,fsrc:nchw fwei:Ohwi16o fbia:x fdst:nChw16c,alg:convolution_direct,mb4_g1ic3oc16384_ih16oh16kh1sh1dh0ph0_iw16ow16kw1sw1dw0pw0,2.08691
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nChw16c out:f32_nchw,num:1,4x16384x16x16,2.44995
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nchw out:f32_nChw16c,num:1,4x16384x16x16,1.64697
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nChw16c out:f32_nchw,num:1,4x16384x16x16,2.30298
mkldnn_verbose,exec,reorder,simple:any,undef,in:f32_nchw out:f32_nchw,num:1,16384x4x16x16,2.1731
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nchw out:f32_nChw16c,num:1,1x1024x1x16384,2.40698
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_oihw out:f32_OIhw16i16o,num:1,16384x1024x1x1,2.29297
mkldnn_verbose,exec,convolution,jit_1x1:avx512_common,forward_inference,fsrc:nChw16c fwei:OIhw16i16o fbia:x fdst:nChw16c,alg:convolution_direct,mb1_g1ic1024oc16384_ih1oh1kh1sh1dh0ph0_iw16384ow16384kw1sw1dw0pw0,457.164

Verbose log after adding the MKL-DNN reshape op:

mkldnn_verbose,exec,convolution,jit:avx512_common,forward_inference,fsrc:nchw fwei:Ohwi16o fbia:x fdst:nChw16c,alg:convolution_direct,mb4_g1ic3oc16384_ih16oh16kh1sh1dh0ph0_iw16ow16kw1sw1dw0pw0,1.86304
mkldnn_verbose,exec,reorder,simple:any,undef,in:f32_nchw16c out:f32_nchw,num:1,4x16384x16x16,2.35913
mkldnn_verbose,exec,reorder,simple:any,undef,in:f32_nchw out:f32_nchw,num:1,16x16x4x16384,1.55103
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nchw out:f32_nChw16c,num:1,1x1024x1x16384,2.6958
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_oihw out:f32_OIhw16i16o,num:1,16384x1024x1x1,2.49683
mkldnn_verbose,exec,convolution,jit_1x1:avx512_common,forward_inference,fsrc:nChw16c fwei:OIhw16i16o fbia:x fdst:nChw16c,alg:convolution_direct,mb1_g1ic1024oc16384_ih1oh1kh1sh1dh0ph0_iw16384ow16384kw1sw1dw0pw0,432.984
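Comparing the two excerpts, the number of reorder operations drops from six to four and the total reorder time shrinks. A throwaway script to tally this from the verbose output (only the reorder lines are pasted here; the `summarize` helper is just for illustration, not part of any tool):

```python
BEFORE = """mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nChw16c out:f32_nchw,num:1,4x16384x16x16,2.44995
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nchw out:f32_nChw16c,num:1,4x16384x16x16,1.64697
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nChw16c out:f32_nchw,num:1,4x16384x16x16,2.30298
mkldnn_verbose,exec,reorder,simple:any,undef,in:f32_nchw out:f32_nchw,num:1,16384x4x16x16,2.1731
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nchw out:f32_nChw16c,num:1,1x1024x1x16384,2.40698
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_oihw out:f32_OIhw16i16o,num:1,16384x1024x1x1,2.29297"""

AFTER = """mkldnn_verbose,exec,reorder,simple:any,undef,in:f32_nchw16c out:f32_nchw,num:1,4x16384x16x16,2.35913
mkldnn_verbose,exec,reorder,simple:any,undef,in:f32_nchw out:f32_nchw,num:1,16x16x4x16384,1.55103
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nchw out:f32_nChw16c,num:1,1x1024x1x16384,2.6958
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_oihw out:f32_OIhw16i16o,num:1,16384x1024x1x1,2.49683"""

def summarize(log):
    """Count mkldnn_verbose reorder lines and sum their times (ms)."""
    reorders = [ln for ln in log.splitlines() if ln.split(",")[2] == "reorder"]
    return len(reorders), sum(float(ln.rsplit(",", 1)[1]) for ln in reorders)

n_before, ms_before = summarize(BEFORE)
n_after, ms_after = summarize(AFTER)
print(n_before, n_after)                 # 6 4
print(round(ms_before - ms_after, 2))    # total reorder ms saved
```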


@ankkhedia
Contributor

@huangzhiyuan Thanks for the contribution!
Could you please make sure that the changes in this PR pass CI?
@mxnet-label-bot [pr-awaiting-testing , MKLDNN, operator]

@marcoabreu marcoabreu added MKLDNN Operator pr-awaiting-testing PR is reviewed and waiting CI build and test labels Oct 29, 2018
@ankkhedia
Contributor

@huangzhiyuan It seems that the PR is still failing on CI. Could you please take a look?

@huangzhiyuan
Contributor Author

@ankkhedia please take a look, thanks :)

@ankkhedia
Contributor

@azai91 @mseth10 Could you please take a look into this PR?

@pengzhao-intel
Contributor

ping @zheng-da @azai91 @mseth10

@kalyc
Contributor

kalyc commented Nov 13, 2018

@mxnet-label-bot update [pr-awaiting-review]

@marcoabreu marcoabreu added pr-awaiting-review PR is waiting for code review and removed MKLDNN Operator pr-awaiting-testing PR is reviewed and waiting CI build and test labels Nov 13, 2018
@stu1130
Contributor

stu1130 commented Nov 21, 2018

ping @azai91 @mseth10 to review

@vandanavk
Contributor

@mxnet-label-bot add [MKLDNN]

return;
}
FallBackCompute(UnaryOp::IdentityCompute<cpu>, attrs, ctx, inputs, req,
outputs);
Contributor


Indent.

@@ -261,7 +317,9 @@ Example::
.set_num_outputs(1)
.set_attr<nnvm::FInferShape>("FInferShape", FlattenShape)
.set_attr<nnvm::FInferType>("FInferType", ElemwiseType<1, 1>)
#if MXNET_USE_MKLDNN == 1
Contributor


If MKLDNN supports this op, please add .set_attr<bool>("TIsMKLDNN", true) for it.

Contributor Author


This attribute is in Line 327.

Contributor

@ZhennanQin ZhennanQin left a comment

Only a few minor comments. Overall LGTM.

@huangzhiyuan
Contributor Author

@azai91 @mseth10 could you help review the PR? This has been out for review for a long long time...

@@ -210,24 +272,18 @@ static void FlattenEx(const nnvm::NodeAttrs& attrs,
#endif
}

#if MXNET_USE_MKLDNN == 1
Member


Seems this will change the original behavior when MKL-DNN is not enabled. FlattenStorageType is not defined then.

Contributor


This follows the new style used by other MKL-DNN ops, i.e. defining the InferStorageType function within the MKLDNN macro. Most other ops were refactored into this style by Luobao; I guess this one was missed.

@@ -914,7 +972,7 @@ NNVM_REGISTER_OP(depth_to_space)
.describe(R"code(Rearranges(permutes) data from depth into blocks of spatial data.
Similar to ONNX DepthToSpace operator:
https://github.com/onnx/onnx/blob/master/docs/Operators.md#DepthToSpace.
The output is a new tensor where the values from depth dimension are moved in spatial blocks
Member


Thank you for fixing this trailing whitespace. Does cpplint complain about it? If not, I would like to keep it as is, since these changes are not related to this PR.

Contributor Author


I have restored the whitespace as before; could you help review again? Thanks :)

@TaoLv
Member

TaoLv commented Dec 7, 2018

Is there any unit test to cover this change?

@huangzhiyuan
Contributor Author

@TaoLv @azai91 @mseth10 I've made the change and unit test based on your suggestions. What do you think?

.set_attr<nnvm::FInferType>("FInferType", ElemwiseType<1, 1>)
.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseNone{"_backward_copy"})
.set_attr<FCompute>("FCompute<cpu>", UnaryOp::IdentityCompute<cpu>)
#if MXNET_USE_MKLDNN == 1
Member


I know that this is not the first op implemented this way, but is there a reason for the two different #if blocks here?

Contributor Author

@huangzhiyuan huangzhiyuan Dec 11, 2018

Nothing different; they have been merged into one block here. Thanks for your review! :)

Contributor

@mseth10 mseth10 left a comment

LGTM

@anirudh2290 anirudh2290 merged commit c4a619c into apache:master Dec 13, 2018
@huangzhiyuan huangzhiyuan deleted the mkldnn_reshape branch December 14, 2018 01:04
juliusshufan added a commit to juliusshufan/incubator-mxnet that referenced this pull request Apr 21, 2019
This reverts commit c4a619c.

Conflicts:
	src/operator/tensor/matrix_op.cc
	tests/python/mkl/test_mkldnn.py