[TOPI] Fix mali conv2d performance regression #3131
Conversation
merrymercy
commented
May 2, 2019
- Fix the performance regression on Mali GPU (#3088)
- Fix tophub for Mali after modifying the arguments of dense (#2877)
+ tvm.const(0, out_dtype) * M[alpha-1][alpha-1][CO-1][P_round-1],
# thw following hack term is used to make the padding in batch gemm ("M")
# effective, otherwise the padding will be eliminated by bound inference
+ tvm.expr.Mul(tvm.const(0, out_dtype),
I suggest leaving a comment pointing to issue #3088 so people understand why Mul is used instead of *.
I'm still confused about why we need this multiplication.
thw -> the
@icemelon9 During batch gemm, we introduce some padding to avoid partial tiles, so we can safely vectorize the innermost loop. However, we won't use all of the output of batch gemm (the padded part is ignored in the final results). The InferBound pass in tvm analyzes the computation region from output to input and keeps only the necessary part. If we don't add this term, the padding added in batch gemm will be eliminated, regardless of how we tweak the shape argument in tvm.compute.
This term accesses the last element in the padded buffer, so it makes all of the padding effective.
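The padding idea itself can be sketched outside of TVM. Below is a hedged illustration in plain NumPy (not the real TVM schedule; all names like VEC, P, P_round are made up for the example): the problem size is rounded up to a multiple of an assumed vector width so the innermost loop has no partial tile, and the padded part of the output is simply sliced off at the end.

```python
import numpy as np

# Assumed vector width and true number of output columns (illustrative only).
VEC = 4
P = 10
# Round P up to a full multiple of VEC so every innermost tile is full.
P_round = ((P + VEC - 1) // VEC) * VEC  # 12

# Fixed inputs so the example is deterministic.
A = np.arange(48, dtype=np.float64).reshape(6, 8)
B = np.arange(80, dtype=np.float64).reshape(8, P)

# Zero-pad B along the column dimension; zeros don't change the valid part.
B_pad = np.zeros((8, P_round))
B_pad[:, :P] = B

C_pad = A @ B_pad   # every tile of width VEC is full, safe to vectorize
C = C_pad[:, :P]    # the padded columns are ignored in the final result

assert np.allclose(C, A @ B)
```

The point of the hack term in the PR is that TVM's bound inference, unlike this NumPy sketch, would notice that the padded columns are never read and shrink the computed region back down, which is exactly what the zero-multiplied access prevents.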
@yzliu tvm.expr.Mul won't do constant folding, while * is equivalent to tvm.expr.Mul plus constant folding.
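The difference can be mimicked without TVM. This is a hypothetical minimal sketch (the classes Const, Load, and Mul are made up and are not the real TVM IR): overloading * applies constant folding, so 0 * x collapses to a constant and the buffer access disappears from the expression tree, whereas constructing the Mul node directly keeps the access alive, which is what lets bound inference see it.

```python
class Expr:
    """Base class for the toy expression tree (illustrative only)."""
    pass

class Const(Expr):
    def __init__(self, value):
        self.value = value

    def __mul__(self, other):
        # `*` performs constant folding: 0 * x simplifies to 0 right away,
        # so the reference to `other` vanishes from the tree.
        if self.value == 0:
            return Const(0)
        return Mul(self, other)

class Load(Expr):
    """Stands in for a buffer access such as M[alpha-1][alpha-1][CO-1][P_round-1]."""
    def __init__(self, name):
        self.name = name

class Mul(Expr):
    # Building the node directly skips folding, so the buffer access
    # survives in the expression tree.
    def __init__(self, a, b):
        self.a, self.b = a, b

folded = Const(0) * Load("M")    # folded away: the access to M is gone
kept = Mul(Const(0), Load("M"))  # the access to M is still in the tree

assert isinstance(folded, Const)
assert isinstance(kept, Mul)
```

This mirrors the situation in the diff: the earlier line using * was folded to a constant, erasing the access that was meant to keep the padded buffer region alive, while tvm.expr.Mul preserves it.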
Could you elaborate in the code comment with what you replied to @icemelon9?
It's too long to put in the comment.
Thanks @merrymercy @tqchen @eqy @icemelon9 for fixing and reviewing.
* [TOPI] fix mali conv
* fix typo
* address comments