[7/10] Code generation for Pooling and Fully Connected via CMSIS-NN #9531
Conversation
Similar to the other one. It seems like this contains contents from the FC PR as well?
tvm::Array<PrimExpr> context_buffer_args = {tir::StringImm(context_buffer_name),
                                            ToArg(context_buffer_size)};

scalar_args = tvm::runtime::Concat(context_buffer_args, scalar_args);
I would suggest keeping the declaration/initialization of the vectors that get concatenated closer to the concat.
@manupa-arm I think we should merge this one, as it will contain both of those changes once I've pushed the changes addressing your comments.
If we want to do that, please close the other PR and adjust the title and description of this PR to reflect its contents.
Done
Change-Id: I67f65e69dfa2c6d31c2ea928d92a827df80bc51a
Change-Id: Ibf22250d961a683208faee362d1960ea266347e8
Change-Id: I0675c3248515c9ddc6f25393a3756f907c98fd46
Change-Id: I2e9ca3edec85991be394bbdb2d1739c3afc62d5e
Change-Id: I38fbde65016713956371fa27dbd2a36f844f5799
Broadly looks good! There are a few minor comments related to clarity, for which I don't want to block the PR for another CI cycle. Would you be able to address them in a follow-up?
@@ -164,21 +156,15 @@ class RelayToTIRVisitor : public MixedModeMutator {
                            ToArg(dilation_w), ToArg(dilation_h), ToArg(clip_min),
                            ToArg(clip_max)};

    // cmsis_nn_dims *input_dims (NHWC)
    // layout NHWC
Sorry Ashutosh, I still don't follow what this comment means. How is it related to the following line?
This was to help the reader understand that the following shape is in the NHWC layout, which will eventually be passed on to the CMSIS-NN API.
I think it might be worth saying that explicitly :)
Ack
    // cmsis_nn_dims *filter_dims (OHWI for Conv2D and IHWO for depthwise)
    // OHWI for Conv2D and IHWO for depthwise
Same here: where is the OHWI information used?
Same as above. Maybe I should explicitly say that this is for the CMSIS-NN API?
I think the information is that the following line produces a tensor in that layout, and subsequent accesses assume that?
Yes, the comment got displaced in the process of review and new edits 😆
@@ -194,7 +180,7 @@ class RelayToTIRVisitor : public MixedModeMutator {
    if (depth_multiplier != -1) {
      cmsisnn_api = "arm_depthwise_conv_wrapper_s8";
      Array<PrimExpr> depthwise_filter_shape{1, filter_shape[0], filter_shape[1], out_channels};
Is it possible to use something like
int kernel_pos_o = kernel_layout.find("O");
instead of the hard-coded numbers?
Yes, partly done above. Missed here.
                                            ToArg(context_buffer_size)};

scalar_args = tvm::runtime::Concat(context_buffer_args, scalar_args);
scalar_args = tvm::runtime::Concat(scalar_args, cmsisnn_input_shape);
[Clarity] I think we can bring the declaration of "cmsisnn_input_shape" closer to here, as it is not used before this point.
We cannot. It's being used to derive another size in the middle. Also, I would prefer to keep the concatenation of arguments in one place; locating all the arguments in the same place has helped me with debugging.
I think I'm missing something here :). Isn't it defined on line 291, with no use in between?
Anyway, if it helps debugging, let's have it this way.
Sorry, I was looking at a different operator. For some of the other operators, context buffer creation requires this shape. To stay consistent, I didn't want to change the code layout.
scalar_args = tvm::runtime::Concat(context_buffer_args, scalar_args);
scalar_args = tvm::runtime::Concat(scalar_args, cmsisnn_input_shape);
scalar_args = tvm::runtime::Concat(scalar_args, cmsisnn_filter_shape);
[Clarity] I think we can bring the declaration of "cmsisnn_filter_shape" closer to here, as it is not used before this point.
Same as above.
scalar_args = tvm::runtime::Concat(context_buffer_args, scalar_args);
scalar_args = tvm::runtime::Concat(scalar_args, cmsisnn_input_shape);
scalar_args = tvm::runtime::Concat(scalar_args, cmsisnn_filter_shape);
scalar_args = tvm::runtime::Concat(scalar_args, bias_shape);
[Clarity] I think we can bring the declaration of "bias_shape" closer to here, as it is not used before this point.
Same as above.
scalar_args = tvm::runtime::Concat(scalar_args, cmsisnn_input_shape);
scalar_args = tvm::runtime::Concat(scalar_args, cmsisnn_filter_shape);
scalar_args = tvm::runtime::Concat(scalar_args, bias_shape);
scalar_args = tvm::runtime::Concat(scalar_args, cmsisnn_output_shape);
[Clarity] I think we can bring the declaration of "cmsisnn_output_shape" closer to here, as it is not used before this point.
Same as above.
Thanks for the reviews.
@manupa-arm Could you please use the commit message "Support for code generation of Maxpool, AvgPool and Fully Connected layers via CMSIS-NN"?
Thanks @ashutosh-arm! This is merged now.
…pache#9531) Support for code generation of Maxpool, AvgPool and Fully Connected layers via CMSIS-NN
Support for Maxpool and AvgPool via CMSIS-NN.
Code generation for fully connected layer via CMSIS-NN
This PR depends on #9456.
Tracking issue #8646.