Overhaul shape inference for custom ops #161
Details
Currently, QONNX custom ops need to implement the `make_shape_compatible_op` function for shape inference. This function is supposed to return a single, standard ONNX node which has the same shape inference behavior as the custom op. Finding a single-node shape inference equivalent can be challenging if the custom op has non-trivial shape inference behavior. Several custom ops work around this by assuming that their input shape is available, computing the desired output shape from it, and returning the result of the `make_const_shape_op` helper. However, this requires that the shapes of all inputs to the custom op are already specified. In cases where they are not, bugs arise, e.g. #152 works around one such bug. We should have a more flexible custom op shape inference system.
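For reference, a minimal sketch of the current interface, assuming QONNX's `CustomOp` base class from `qonnx.custom_op.base`; both example ops are hypothetical, and the other abstract `CustomOp` methods (execution, verification, etc.) are omitted:

```python
# Minimal sketch of the current interface. MyEltwiseOp and MyReduceOp are
# hypothetical examples; the other abstract CustomOp methods are omitted.
from onnx import helper
from qonnx.custom_op.base import CustomOp


class MyEltwiseOp(CustomOp):
    """Custom op whose output shape simply equals its input shape."""

    def make_shape_compatible_op(self, model):
        node = self.onnx_node
        # A single standard node with identical shape behavior: elementwise Relu.
        return helper.make_node("Relu", [node.input[0]], [node.output[0]])


class MyReduceOp(CustomOp):
    """Custom op with non-trivial shape behavior, e.g. dropping the last axis."""

    def make_shape_compatible_op(self, model):
        node = self.onnx_node
        # Assumes the input shape is already annotated. This is the fragile
        # part: if it is not, get_tensor_shape returns None and this breaks.
        ishape = model.get_tensor_shape(node.input[0])
        oshape = ishape[:-1]
        return self.make_const_shape_op(oshape)
```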
New behavior
There are several paths forward which should be evaluated in more detail.

1. Allow `make_shape_compatible_op` to return a subgraph of standard nodes instead of a single node. The logic in `InferShapes` for replacing custom nodes with single standard nodes will also need an overhaul, replacing a single node with a subgraph and then back again.
2. Keep the `make_shape_compatible_op` single-node interface, and keep the assumptions about input shapes being available. Rework `InferShapes` to replace custom ops one at a time in topological order, calling ONNX shape inference at each step, instead of replacing all of them at the same time before calling ONNX shape inference (see the sketch after this list).
3. Use `PyOp` in `onnxruntime-extensions` to switch out a larger portion of how QONNX handles custom ops. This goes beyond just overhauling shape inference, but may have other benefits. See https://github.com/microsoft/onnxruntime-extensions/blob/main/tutorials/pytorch_custom_ops_tutorial.ipynb
Motivation
Avoid shape-inference-related bugs for custom ops.
Parts of QONNX being affected
Depends on the approach chosen.
Comments
Hey, one problem I currently encountered is that PyOp does not support attributes that are lists, or at least I did not figure out how to do it. This is a problem for the QONNX custom ops that use list-valued attributes. Here is a small example of how onnx_op works (sketched below):

I have some follow-up comments on this issue. Firstly, I can confirm that onnxruntime-extensions currently only supports float32, int64 and string attributes. So to pursue option 3 further, I see several options.
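A minimal sketch of how `onnx_op` works, as referenced in the first comment, based on the onnxruntime-extensions tutorial; the op name, attribute, and function body are illustrative:

```python
# Minimal sketch of onnxruntime-extensions' onnx_op decorator, based on its
# tutorial; the op name, attribute, and body are illustrative.
import numpy as np
from onnxruntime_extensions import onnx_op, PyCustomOpDef


@onnx_op(
    op_type="ReverseMatrix",
    inputs=[PyCustomOpDef.dt_float],
    outputs=[PyCustomOpDef.dt_float],
    # Attributes are declared with scalar types; as the comment above notes,
    # only float32, int64 and string attributes are supported, not lists.
    attrs={"axis": PyCustomOpDef.dt_int64},
)
def reverse_matrix(x, **kwargs):
    # The Python function body defines the runtime behavior of the custom op;
    # declared attributes arrive as keyword arguments.
    return np.flip(x, axis=int(kwargs["axis"])).astype(np.float32)
```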