[PT2] MinMax #3166
base: develop
Conversation
Great job!
It would be great if you could separate the small typo fixes from the min-max implementation; there are too many small changes for me (and probably for others as well) to review in one PR.
from nncf.experimental.torch2.function_hook.nncf_graph.nncf_graph_builder import GraphModelWrapper

class PT2Engine(Engine):
Can we reuse PTEngine as is done for the TorchFX backend?
We can, but for what reason?
I have tried to keep a one-way dependency of torch2 on torch.
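A minimal sketch of what I mean, assuming the common Engine interface (the dispatch on input type here is illustrative, not the exact code in this PR):

import torch
from torch import nn

from nncf.common.engine import Engine


class PT2Engine(Engine):
    def __init__(self, model: nn.Module) -> None:
        self._model = model
        self._model.eval()

    def infer(self, input_data):
        # Dispatch on the container type of the calibration sample.
        with torch.no_grad():
            if isinstance(input_data, dict):
                return self._model(**input_data)
            if isinstance(input_data, tuple):
                return self._model(*input_data)
            return self._model(input_data)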
I was asked to remove the PTFXEngine by @alexsu52; I'm awaiting the same comment from him here.
def _apply_insertion_transformation(
    self, model: nn.Module, transformations: List[PT2InsertionCommand]
What do you think about introducing transformations as is done for TorchFX? https://github.com/openvinotoolkit/nncf/blob/develop/nncf/experimental/torch/fx/model_transformer.py#L195-L208
I don't see any advantages in this approach (using functions instead of commands), and it makes development and debugging much more difficult.
This discussion should be part of the ModelTransformer refactoring.
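To illustrate the command-based style: each transformation is plain data that can be printed and inspected before the model is mutated. A hedged sketch (the command fields are assumptions for illustration, not the PR's actual PT2InsertionCommand):

from dataclasses import dataclass
from typing import List

from torch import nn


@dataclass
class PT2InsertionCommand:  # hypothetical fields, for illustration only
    target_node_name: str
    hook_module: nn.Module


def apply_insertion_commands(model: nn.Module, commands: List[PT2InsertionCommand]) -> nn.Module:
    for command in commands:
        # The full command list is visible in a debugger as structured data
        # before any mutation happens; hook registration itself is omitted.
        print(f"insert {type(command.hook_module).__name__} at {command.target_node_name}")
    return model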
Ok, I had to ask to be sure the PT2 and FX backends are aligned. I'll approve it, but what are the difficulties with transformations? Why are closures worse for debugging?
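For contrast, a sketch of the function-based style from the linked FX transformer (names are illustrative): the parameters are captured in a closure, so the resulting callable prints as <function ...> and hides its arguments, which is the debuggability concern above.

from typing import Callable

from torch import nn

TransformationFn = Callable[[nn.Module], nn.Module]


def insertion_transformation_builder(target_node_name: str, hook: nn.Module) -> TransformationFn:
    def transformation(model: nn.Module) -> nn.Module:
        # target_node_name and hook are only reachable via
        # transformation.__closure__; actual insertion is omitted here.
        return model

    return transformation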
Are conformance test numbers available?
(N) - experimental tracing without F/BC
Changes
Introduce TORCH2 backend
MinMax algorithm for the torch2 backend
Add handle_torch_function to the quantization function so it can be traced via __torch_function__ (a sketch follows below)
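A minimal sketch of the handle_torch_function item above; the fake-quantize body is illustrative, not NNCF's actual quantization kernel:

import torch
from torch.overrides import handle_torch_function, has_torch_function


def quantize_dequantize(x: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # If any argument defines __torch_function__ (e.g. a tracing tensor
    # subclass), let it intercept this call so the op shows up in the trace.
    if has_torch_function((x, scale)):
        return handle_torch_function(quantize_dequantize, (x, scale), x, scale)
    return torch.round(x / scale) * scale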
Related tickets
152996