[RFC] Asymmetric padding for convolution #2682
Comments
Also @Huyuwei to comment. I think it could avoid the extra pad op and give better performance.
@Rasterer The proposal sounds good to me. It will definitely make converting conv easier. I assume we can make it backward compatible by following the padding convention in pooling (1, 2, or 4 values)?
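A minimal sketch of what that backward-compatible convention could look like; `normalize_padding` is a hypothetical helper written for illustration, not an existing TOPI function:

```python
# Hypothetical helper mirroring the pooling padding convention:
# 1, 2, or 4 values all normalize to (pad_top, pad_left, pad_bottom, pad_right).
def normalize_padding(padding):
    if len(padding) == 1:
        p = padding[0]
        return (p, p, p, p)
    if len(padding) == 2:
        pad_h, pad_w = padding
        return (pad_h, pad_w, pad_h, pad_w)
    if len(padding) == 4:
        return tuple(padding)
    raise ValueError("padding must contain 1, 2, or 4 values")

assert normalize_padding([1]) == (1, 1, 1, 1)
assert normalize_padding([1, 2]) == (1, 2, 1, 2)
assert normalize_padding([0, 1, 0, 2]) == (0, 1, 0, 2)
```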
@Rasterer Any update on this?
I'm afraid I cannot commit a PR due to an open-source policy change at my company. Code contributions are welcome if you are interested in this. In addition to my original proposal, I think padding performance can be improved further by changing the fusion pattern. Currently, conv padding is inlined into the conv op, so the conditional check in the padding runs more than once per input element (because conv reads each input element repeatedly). This can be avoided by fusing the padding into the preceding op and implementing it with offset output writes.
@FrozenGene, I'd like to try to implement this. I am thinking I could follow @Huyuwei's proposal and do this in a backward-compatible way.
Yes. We should keep backward compatibility.
Closed by #4511
Similar to #1346, conv also has asymmetric padding cases, especially in frameworks that support 'SAME' padding. But adding this support has a much larger impact on the current code than it did for the pooling ops, so we file this RFC to discuss its necessity and implementation details.
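For concreteness, here is a small sketch of the TensorFlow-style 'SAME' padding arithmetic along one dimension, which is where the asymmetry comes from (the function name is ours):

```python
def same_padding_1d(in_size, kernel, stride):
    """TensorFlow-style 'SAME' padding along one spatial dimension."""
    out_size = -(-in_size // stride)  # ceil(in_size / stride)
    pad_total = max((out_size - 1) * stride + kernel - in_size, 0)
    pad_before = pad_total // 2      # extra pixel goes on the after side
    pad_after = pad_total - pad_before
    return pad_before, pad_after

# in_size=10, kernel=3, stride=2 gives (0, 1): one pixel of padding on
# the bottom/right only, so the pads are asymmetric.
print(same_padding_1d(10, 3, 2))  # (0, 1)
```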
Necessity
In the current TensorFlow frontend, conv padding is supported by adding a separate pad nnvm op. Consequently, the separate pad op cannot be fused into conv, which leads to suboptimal performance. Supporting asymmetric padding natively in relay/nnvm solves this problem.
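A sketch of the difference in Relay terms (shapes are illustrative, and the 4-value `padding` on the second conv is the form this RFC proposes, not an API guaranteed to exist at the time of writing):

```python
from tvm import relay

x = relay.var("x", shape=(1, 3, 10, 10))
w = relay.var("w", shape=(8, 3, 3, 3))

# Today: the frontend emits a separate pad op that cannot fuse into conv.
padded = relay.nn.pad(x, pad_width=((0, 0), (0, 0), (0, 1), (0, 1)))
y_current = relay.nn.conv2d(padded, w, strides=(2, 2), padding=(0, 0))

# Proposed: the conv op carries the asymmetric padding itself,
# padding=(pad_top, pad_left, pad_bottom, pad_right).
y_proposed = relay.nn.conv2d(x, w, strides=(2, 2), padding=(0, 0, 1, 1))
```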
Breakdown
We propose to support it in three steps:
Add asymmetric padding (pad_top, pad_left, pad_bottom, pad_right) support for conv/deconv in relay/nnvm and generic topi.
Modify _get_workload behavior to accommodate the new 4-number padding style, and change each backend-specific topi schedule accordingly (see the sketch after this list).
Add support in the TensorFlow/Keras/CoreML frontends: eliminate the separate pad op and use attr['padding'] instead.
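As referenced in step 2, a hedged sketch of how the conv2d workload record in generic topi might change; the field lists below are illustrative, not the actual definitions:

```python
from collections import namedtuple

# Current style: symmetric padding, one value per spatial axis.
WorkloadSym = namedtuple(
    "Workload",
    ["in_dtype", "out_dtype", "height", "width", "in_filter", "out_filter",
     "hkernel", "wkernel", "hpad", "wpad", "hstride", "wstride"])

# Proposed style: four explicit pad fields, so backend-specific schedules
# keyed on the workload see the full asymmetric padding.
WorkloadAsym = namedtuple(
    "Workload",
    ["in_dtype", "out_dtype", "height", "width", "in_filter", "out_filter",
     "hkernel", "wkernel", "padt", "padl", "padb", "padr",
     "hstride", "wstride"])
```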
@srkreddy1238 @FrozenGene