[Rewriter] Add optimizer to fold Pad operators into Conv #2363
Conversation
Force-pushed from 4b9b69b to 19b0418.
Codecov Report ❌ Patch coverage is
Additional details and impacted files:

@@ Coverage Diff @@
## main #2363 +/- ##
==========================================
+ Coverage 69.81% 70.11% +0.30%
==========================================
Files 209 211 +2
Lines 25323 25590 +267
Branches 2529 2565 +36
==========================================
+ Hits 17678 17943 +265
+ Misses 6770 6766 -4
- Partials 875 881 +6
Force-pushed from 19b0418 to 1604446: rebasing on main and fixing conflicts.
@justinchuby I forgot to mention in my previous message that the changes were not ready yet (I was just fixing the rebase with main).
In the latest commits I updated the code with all the suggestions.
This work is now ready to be reviewed.
Could you update the PR title and description? Thanks
@justinchuby are the new title and description OK?
Force-pushed from 0b166c1 to 7f7d17e: rebasing on main and adding @justinchuby's suggestions.
Convert 'auto_pad' attribute into a list of explicit pads.
Fix silent bugs
Force-pushed from 7f7d17e to 8fc4326.
Last force-push: rebasing on upstream/main and applying @gramalingam's suggestions.
Pull Request Overview
This PR introduces a new optimization to fuse Pad operators into subsequent Conv operators, reducing the number of operations in ONNX models. It adds pattern-matching rules to rewrite sequences like Conv(Pad(x)) and ConvInteger(Pad(x)) into optimized Conv(x) and ConvInteger(x) operations, respectively.
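The equivalence behind the fusion can be checked numerically. The sketch below is not the PR's implementation; conv2d is a minimal single-channel, stride-1 stand-in for Conv that applies explicit zero padding before a valid cross-correlation:

```python
import numpy as np

def conv2d(x, w, pads=(0, 0, 0, 0)):
    # Minimal single-channel 2D cross-correlation with explicit
    # zero padding (pads = [top, left, bottom, right]) and stride 1.
    t, l, b, r = pads
    x = np.pad(x, ((t, b), (l, r)))
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))
w = rng.standard_normal((3, 3))

# Pad(x) followed by an unpadded Conv ...
fused_out = conv2d(np.pad(x, 1), w)
# ... equals a single Conv carrying the pads attribute.
direct_out = conv2d(x, w, pads=(1, 1, 1, 1))
assert np.allclose(fused_out, direct_out)
```

Since both paths zero-pad by the same amounts before the same correlation, the Pad node can be dropped and its padding absorbed into the Conv attribute.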
Key changes include:
- Implementation of pattern matching and rewriting logic for Pad-Conv fusion
- Addition of padding format normalization to convert auto_pad attributes to explicit padding lists
- Integration of the new rule set into the default optimization pipeline
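For the normalization step, the SAME_UPPER and SAME_LOWER settings can be lowered to explicit per-axis pads using the formulas in the ONNX Conv specification. same_pads below is a hypothetical helper for one spatial axis, not code from this PR:

```python
import math

def same_pads(auto_pad, input_size, kernel, stride=1, dilation=1):
    # Compute explicit (begin, end) padding for one spatial axis from a
    # SAME_UPPER / SAME_LOWER auto_pad setting, per the ONNX Conv spec:
    # output = ceil(input / stride), and total padding is whatever makes
    # that output size reachable with the effective kernel size.
    out = math.ceil(input_size / stride)
    eff_kernel = (kernel - 1) * dilation + 1
    total = max((out - 1) * stride + eff_kernel - input_size, 0)
    if auto_pad == "SAME_UPPER":
        # SAME_UPPER puts the extra cell (for odd totals) at the end.
        return total // 2, total - total // 2
    return total - total // 2, total // 2  # SAME_LOWER

# A 5-wide axis with a 3-wide kernel, stride 1: pad 1 on each side.
print(same_pads("SAME_UPPER", 5, 3))  # (1, 1)
# An odd total pad of 3 is split 1/2 by SAME_UPPER.
print(same_pads("SAME_UPPER", 5, 4))  # (1, 2)
```

Running the helper over every spatial axis and concatenating begin pads followed by end pads yields the explicit pads list the rewriter needs.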
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

File | Description
---|---
onnxscript/rewriter/fuse_pad_into_conv.py | Core implementation with pattern matching classes and rewrite logic for fusing Pad into Conv operations
onnxscript/rewriter/fuse_pad_into_conv_test.py | Comprehensive test suite covering fusion scenarios, edge cases, and normalization functionality
onnxscript/rewriter/__init__.py | Integration of the new rule set into the default optimization rules
# Conv constraints: inputs/outputs
input_shape = conv_node.inputs[0].shape
output_shape = conv_node.outputs[0].shape
if len(input_shape) <= 2:
What happens when input_shape/output_shape is None? Do we need to handle it?
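One defensive option, sketched here with stub objects rather than the real onnxscript IR (can_fuse is a hypothetical name, not the PR's code), is to bail out of the rewrite whenever shape inference has not produced a shape:

```python
from types import SimpleNamespace

def can_fuse(conv_node):
    # Skip the rewrite when either shape is unknown (None), since the
    # rank check below would otherwise raise a TypeError.
    input_shape = conv_node.inputs[0].shape
    output_shape = conv_node.outputs[0].shape
    if input_shape is None or output_shape is None:
        return False
    return len(input_shape) > 2

# Stub nodes standing in for IR values (shapes are illustrative).
known = SimpleNamespace(inputs=[SimpleNamespace(shape=(1, 3, 5, 5))],
                        outputs=[SimpleNamespace(shape=(1, 8, 5, 5))])
unknown = SimpleNamespace(inputs=[SimpleNamespace(shape=None)],
                          outputs=[SimpleNamespace(shape=None)])
print(can_fuse(known), can_fuse(unknown))  # True False
```

Returning False simply leaves the model unrewritten, which is the safe default for a pattern-based optimizer.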
Following #2301, the fuse_pad_into_conv rule set is introduced to reduce the following list of operators:
Additionally, NormalizePadFormat is introduced in order to change the auto_pad Conv attribute into its explicit pads list (ref: https://onnx.ai/onnx/operators/onnx__Conv.html).
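The index bookkeeping of the fusion can be illustrated as follows. fold_pads is a hypothetical helper, not the PR's code: it merges a rank-N Pad's pads into a Conv's spatial-only pads, assuming the Pad leaves batch and channel axes untouched (otherwise the fusion is invalid):

```python
def fold_pads(pad_pads, conv_pads, rank):
    # Pad pads:  [x1_begin..xN_begin, x1_end..xN_end] over ALL axes.
    # Conv pads: [d1_begin..dM_begin, d1_end..dM_end] over SPATIAL axes
    # only (the last rank-2 axes, after batch and channel).
    # Fusion requires zero padding on the batch and channel axes.
    assert all(p == 0 for p in pad_pads[:2] + pad_pads[rank:rank + 2])
    begins = pad_pads[2:rank]       # spatial begin pads
    ends = pad_pads[rank + 2:]      # spatial end pads
    # Elementwise sum of the Pad's spatial pads and the Conv's pads.
    return [c + p for c, p in zip(conv_pads, begins + ends)]

# Rank-4 (NCHW) example: Pad adds 1 on each side of H and W,
# the Conv starts with no padding of its own.
print(fold_pads([0, 0, 1, 1, 0, 0, 1, 1], [0, 0, 0, 0], 4))  # [1, 1, 1, 1]
```

After the merge, the Pad node is removed and the summed list becomes the Conv's pads attribute.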