[v1.x] ONNX Support for MXNet reverse op #19737
Conversation
Hey @Zha0q1, thanks for submitting the PR.
CI supported jobs: [website, sanity, edge, windows-cpu, unix-gpu, centos-gpu, unix-cpu, windows-gpu, clang, miscellaneous, centos-cpu]
# Transpose takes perm as a parameter, so we must 'pad' the input to a known dim (10 here)
perm = [i for i in range(10)]
perm[0], perm[axis] = axis, 0
print(perm)
remove debug?
axis = int(attrs.get('axis', 0))

# Transpose takes perm as a parameter, so we must 'pad' the input to a known dim (10 here)
perm = [i for i in range(10)]
what happens if 10 is not enough?
I think in MXNet 10-d is the largest you can get.
I just checked: it seems we can create >10-d tensors, but I think many ops do not support >=10-d tensors, and in general there is no use case for such high-dimensional tensors.
Got it, thanks for the explanation.
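For concreteness, the fixed-length perm from the diff just swaps position 0 with the requested axis and leaves the remaining entries in place; a quick illustration with an assumed axis value of 3:

axis = 3
perm = [i for i in range(10)]
perm[0], perm[axis] = axis, 0
print(perm)  # [3, 1, 2, 0, 4, 5, 6, 7, 8, 9]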
LGTM, thanks!
LGTM, thanks!
There is no direct mapping of reverse in the ONNX op set, so I had to mimic the behavior by combining other ONNX ops. The performance is most likely not great, but functionally I got it to behave the same as MXNet's reverse.
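For reference, a minimal NumPy sketch of the same idea (pad to a known rank, move the target axis to the front, reverse along the leading axis, then undo the transpose and the padding). The exact ONNX op sequence used in the PR is not shown in this thread, so this only illustrates the approach, not the exporter's actual graph:

import numpy as np

def reverse_like_mxnet(data, axis):
    """Mimic mx.nd.reverse(data, axis=axis) via rank-padding + transpose.

    Mirrors the exporter's trick of padding to a fixed rank (10) so a
    fixed-length perm can be used; the real ONNX graph would use
    Reshape/Transpose/etc. nodes instead of NumPy calls.
    """
    pad_rank = 10                                   # fixed rank assumed by the exporter
    shape = data.shape
    # Pad the shape with trailing 1s so the tensor always has 10 dims.
    padded = data.reshape(shape + (1,) * (pad_rank - data.ndim))
    # Fixed-length perm that swaps axis 0 with the target axis.
    perm = list(range(pad_rank))
    perm[0], perm[axis] = axis, 0
    transposed = padded.transpose(perm)
    # Reverse along the leading axis, then undo the transpose and padding.
    flipped = transposed[::-1]
    return flipped.transpose(perm).reshape(shape)

x = np.arange(24).reshape(2, 3, 4)
assert np.array_equal(reverse_like_mxnet(x, 1), np.flip(x, 1))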