Hi ONNXRT team,
I implemented a custom op in ONNX Runtime and was able to run it with correct results.
That said, I implemented multiple versions of the kernel for different input shapes (currently four versions for four different input heights), so I have to run each version separately. When I want to run a model containing multiple such ops at once, I have difficulty making the custom op dynamic. Is there any way to make it dynamic?
I created the different versions with `if-else` conditions in this function: tutorials/PyTorchCustomOperator/ort_custom_op/custom_op.h (Line 38 in ae0202e)
So, whenever I want to run with particular dims, I pass the args here: tutorials/PyTorchCustomOperator/ort_custom_op/custom_op_test.cc (Line 89 in ae0202e)
`CustomOp custom_op(implem, ih)`
Here `implem` is under my control, so no worries about that, but `ih` depends on the height of the input tensor. The main thing I want to do is execute the custom op dynamically based on the height of the input tensor.
I have referred to this tutorial for adding the custom op in ONNXRT: https://github.com/onnx/tutorials/tree/master/PyTorchCustomOperator
Looking forward to your reply.
Thanks!