Support custom operators cummax and cummin for onnxruntime #1010
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master    #1010      +/-   ##
==========================================
- Coverage   64.57%   64.56%   -0.02%
==========================================
  Files         152      152
  Lines        9792     9800       +8
  Branches     1779     1780       +1
==========================================
+ Hits         6323     6327       +4
- Misses       3141     3144       +3
- Partials      328      329       +1
```
LGTM! @ZwwWayne
Hi, this PR tries to implement the custom operators `mmcv::cummax` and `mmcv::cummin`, which support exporting `torch.cummax` and `torch.cummin` to ONNX format and running them with ONNX Runtime. `mmcv::cummax` is a more general operation and an extension of the previous `mmcv::CornerPool`, since the former supports arbitrary tensor shapes while the latter only supports 4-D input tensors. Based on the customized `mmcv::cummax` and `mmcv::CornerPool` operations, CornerNet in MMDet might be exportable to ONNX for various PyTorch versions.
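For reference, below is a minimal sketch (not the code from this PR) of how such an export path typically works: a custom symbolic function maps `torch.cummax` onto an `mmcv::cummax` node during `torch.onnx.export`, and ONNX Runtime loads the compiled custom-op library at inference time. The library filename `libmmcv_ort_ops.so` and the choice of opset 11 are assumptions for illustration.

```python
# Sketch only: map torch.cummax to a custom `mmcv::cummax` ONNX node.
# The opset version and custom-op library path are illustrative assumptions.
import torch
from torch.onnx import register_custom_op_symbolic
from torch.onnx.symbolic_helper import parse_args


@parse_args('v', 'i')  # input is a tensor value, dim is an int constant
def cummax_symbolic(g, input, dim):
    # Emit one node in the custom `mmcv` domain with two outputs
    # (values, indices), mirroring torch.cummax's return signature.
    return g.op('mmcv::cummax', input, dim_i=dim, outputs=2)


# The empty namespace targets the aten operator `cummax`.
register_custom_op_symbolic('::cummax', cummax_symbolic, 11)


class CumMaxModel(torch.nn.Module):

    def forward(self, x):
        values, indices = torch.cummax(x, dim=1)
        return values, indices


dummy = torch.rand(2, 3, 4, 5)
torch.onnx.export(CumMaxModel(), dummy, 'cummax.onnx', opset_version=11)

# Running the exported graph needs the mmcv custom ops compiled for
# ONNX Runtime; the shared-library path below is a placeholder.
import onnxruntime as ort

so = ort.SessionOptions()
so.register_custom_ops_library('libmmcv_ort_ops.so')
sess = ort.InferenceSession('cummax.onnx', so)
values, indices = sess.run(None, {sess.get_inputs()[0].name: dummy.numpy()})
```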