Regarding the dynamism for custom op in ONNXRT #269

Open
Darshvino opened this issue Jul 18, 2022 · 0 comments
Hi ONNXRT team,

I implemented a custom op in ONNXRT and was able to run it and get correct results.

Having said that, I implemented multiple versions of the kernel for multiple shapes (currently 4 versions for 4 different input heights), so each version has to be run separately. When I want to run a model containing several of these ops at once, I have difficulty making the custom op dynamic. Is there any way I can make it dynamic?

I select between the different versions with if-else conditions in this function:

void* CreateKernel(Ort::CustomOpApi api, const OrtKernelInfo* info) { return new GroupNormKernel<float>(api, info); };
so that it runs for different heights.

struct CustomOp
    : Ort::CustomOpBase<CustomOp, Kernel<int64_t>>
{
private:
  std::string implem;
  unsigned int ih;

public:
  CustomOp(std::string implem, unsigned int ih) : implem(implem), ih(ih) {}

  void *CreateKernel(OrtApi api, const OrtKernelInfo *info) const
  {
    // Pick the kernel variant that was compiled for this input height.
    if (ih == 54)
    {
      return new Kernel_1<int64_t>(api, info);
    }
    else if (ih == 50)
    {
      return new Kernel_2<int64_t>(api, info);
    }
    // ..... (the other two heights follow the same pattern)
  }
};

So, whenever I want to run for particular dims, I pass the arguments here, as CustomOp custom_op(implem, ih). implem is in my control, so no worries about that, but ih depends on the height of the input tensor.
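
For context, this is roughly how the op gets constructed and registered at the moment (a minimal sketch assuming the standard Ort::CustomOpDomain / Ort::SessionOptions registration shown in the tutorial; the implem value, domain name, and model path are placeholders, and the narrow-string path is for Linux):

#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "custom_op_test");

  // ih must be fixed here, before the session is created, even though the
  // real height is only known once the input tensor arrives.
  CustomOp custom_op("reference", /*ih=*/54);   // "reference" is a placeholder implem

  Ort::CustomOpDomain domain("mydomain");       // placeholder domain name
  domain.Add(&custom_op);

  Ort::SessionOptions session_options;
  session_options.Add(domain);

  Ort::Session session(env, "model.onnx", session_options);  // placeholder model path
  // ... create input tensors and call session.Run(...)
  return 0;
}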

So, the main thing I want to do here is to execute the custom op dynamically based on the height of the input tensor.
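
To make the idea concrete, what I am imagining is reading the height from the input tensor's shape inside Compute and dispatching there, instead of fixing it when the kernel is created (just a sketch using the old Ort::CustomOpApi wrapper from the tutorial; DispatchKernel, the Kernel_N variants, and the NCHW layout assumption are mine):

#include <onnxruntime_cxx_api.h>
#include <cstdint>
#include <vector>

template <typename T>
struct DispatchKernel {
  DispatchKernel(Ort::CustomOpApi api, const OrtKernelInfo* /*info*/) : api_(api) {}

  void Compute(OrtKernelContext* context) {
    // Query the shape of input 0 at run time.
    const OrtValue* input = api_.KernelContext_GetInput(context, 0);
    OrtTensorTypeAndShapeInfo* shape_info = api_.GetTensorTypeAndShape(input);
    std::vector<int64_t> dims = api_.GetTensorShape(shape_info);
    api_.ReleaseTensorTypeAndShapeInfo(shape_info);

    // Assuming NCHW layout, dims[2] is the input height.
    const int64_t ih = dims[2];
    if (ih == 54) {
      // run the height-54 variant (Kernel_1's implementation)
    } else if (ih == 50) {
      // run the height-50 variant (Kernel_2's implementation)
    }
    // ... remaining heights
  }

 private:
  Ort::CustomOpApi api_;
};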

I referred to this tutorial for adding a custom op in ONNXRT: https://github.com/onnx/tutorials/tree/master/PyTorchCustomOperator

Looking forward to your reply.

Thanks!
