Port all C++ ops to use the dispatcher #2796
Comments
With the current master, it now seems that the torchvision ops are no longer registered when building/using the C++ side of torchvision. I think it's related to these dispatcher changes. Is this expected? My current fix is to manually add …
Hi @bmanga, I was just chatting with @vfdev-5 about this. Indeed, it seems this started once we began using the dispatcher for NMS and RoIAlign. I don't think this is expected, and we should fix it. @ezyang, it looks like since we added the dispatcher to RoIAlign, C++ users of torchvision started having issues using RoIAlign, as it wouldn't be linked into the binary anymore. Do you know what the cause might be?
@fmassa the problem seems to be that the macro …
@bmanga could you please detail your solution? Does it still use …
@vfdev-5 yes, those are kept. I just make sure that … That PR was from before …
To reference the issue from last time directly: it was #2134.
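For reference, calling a dispatcher-registered op from C++ goes through a name lookup rather than a direct symbol reference. Below is a minimal sketch, assuming the op was registered as `torchvision::nms`; it is not the exact code being discussed in the comments above.

```cpp
#include <ATen/ATen.h>
#include <ATen/core/dispatch/Dispatcher.h>

// Look the op up by its schema name and call it through the dispatcher.
// If the registration was dropped at link time (e.g. the static initializer
// in a static library was never pulled in), this lookup throws at runtime,
// which matches the "ops are no longer registered" symptom described above.
at::Tensor call_nms(const at::Tensor& dets,
                    const at::Tensor& scores,
                    double iou_threshold) {
  static auto op =
      c10::Dispatcher::singleton()
          .findSchemaOrThrow("torchvision::nms", "")
          .typed<at::Tensor(const at::Tensor&, const at::Tensor&, double)>();
  return op.call(dets, scores, iou_threshold);
}
```

When a library is linked statically, the `TORCH_LIBRARY` registrations run from static initializers that an ordinary static link may drop; forcing the linker to keep them (for example with `--whole-archive` on GNU ld) is a common general workaround, though it is not necessarily the fix adopted in this thread.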
🚀 Feature
We currently handle CPU / CUDA / autograd dispatch manually in our wrapper functions. We should instead use the dispatcher from PyTorch, which was built to do exactly that.
The work should closely follow the PR from @ezyang in #2366; a rough sketch of the registration pattern is shown below.
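A rough sketch of what the dispatcher-based registration could look like (the kernel bodies here are placeholders, not torchvision's actual implementations):

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

// Placeholder kernels standing in for the real CPU / CUDA implementations;
// they only exist so the sketch compiles.
at::Tensor nms_cpu_kernel(const at::Tensor& dets, const at::Tensor& scores,
                          double iou_threshold) {
  return at::empty({0}, dets.options().dtype(at::kLong));
}

at::Tensor nms_cuda_kernel(const at::Tensor& dets, const at::Tensor& scores,
                           double iou_threshold) {
  return at::empty({0}, dets.options().dtype(at::kLong));
}

// Declare the operator schema once ("float" in a schema maps to double in C++).
TORCH_LIBRARY(torchvision, m) {
  m.def("nms(Tensor dets, Tensor scores, float iou_threshold) -> Tensor");
}

// Register one kernel per backend; the dispatcher picks the right one based on
// the dispatch keys (device, autograd, autocast, ...) of the input tensors.
TORCH_LIBRARY_IMPL(torchvision, CPU, m) {
  m.impl("nms", nms_cpu_kernel);
}

TORCH_LIBRARY_IMPL(torchvision, CUDA, m) {
  m.impl("nms", nms_cuda_kernel);
}
```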
Motivation
The dispatcher is a new mechanism in PyTorch that selects which kernel to run based on the properties of the input tensors. It is thus a centralized place where CPU / CUDA / autograd / autocast / quantized / XLA / etc. dispatch is handled.
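As an illustration of how the autograd case fits in, here is a hedged sketch of registering an autograd kernel under the Autograd dispatch key for a hypothetical `torchvision::my_op`; the op name, schema, and kernel body are made up for the example and assume the schema was defined elsewhere.

```cpp
#include <ATen/ATen.h>
#include <torch/autograd.h>
#include <torch/library.h>

using torch::autograd::AutogradContext;
using torch::autograd::variable_list;

// Illustrative autograd wrapper: forward() stashes what backward() needs;
// backward() must return one entry per forward() input (non-tensor inputs
// get an undefined tensor).
class MyOpFunction : public torch::autograd::Function<MyOpFunction> {
 public:
  static at::Tensor forward(AutogradContext* ctx, const at::Tensor& input,
                            double scale) {
    ctx->saved_data["scale"] = scale;
    // A real kernel would redispatch to the CPU/CUDA implementation here;
    // this placeholder just scales the input.
    return input * scale;
  }

  static variable_list backward(AutogradContext* ctx,
                                variable_list grad_output) {
    double scale = ctx->saved_data["scale"].toDouble();
    return {grad_output[0] * scale, at::Tensor()};
  }
};

at::Tensor my_op_autograd(const at::Tensor& input, double scale) {
  return MyOpFunction::apply(input, scale);
}

// Hook the wrapper into the dispatcher under the Autograd key, assuming the
// schema "my_op(Tensor input, float scale) -> Tensor" was defined elsewhere.
TORCH_LIBRARY_IMPL(torchvision, Autograd, m) {
  m.impl("my_op", my_op_autograd);
}
```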
One thing to keep an eye on is that we currently need to duplicate the input checks for both the CPU and CUDA functions. This is something that @ezyang is working on in pytorch/pytorch#45277.
Current support:
Question for @ezyang: following our discussion in https://github.com/pytorch/vision/pull/2366/files#r447547554, do you think we should provide a fallback in PyTorch for registering ops without double backwards?