Support of conv/linear oneDNN param cache for TorchInductor #7

Open · wants to merge 1 commit into master
Conversation

Guobing-Chen
Owner

This PR aims to provide oneDNN param cache support for conv/linear OPs in TorchInductor.

Additional OPs are added in pytorch to generate conv/linear params based on input tensors and other parameters. These OPs return a param handle that is stored in a param cache sitting in the TorchInductor-generated sub-graph code. Conv/linear OPs in the generated sub-graph code then query the cache and use the param directly, instead of initializing a conv param every time they are invoked.

TorchDynamo will guard on input shape changes and invoke TorchInductor to re-generate the sub-graph code, which will then include a new param cache for the conv/linear OPs with the new shapes.

Conv+unary fusion is currently supported, while conv+binary is not fully supported yet, as it needs the latest ideep, which is still being merged.
