Optimize dynamo dynamic shape caching #7726
Merged
When we compile with dynamic shapes enabled, Dynamo will only trace the Python bytecode once, but will pass inputs with different shapes to the same `optimized_mod`. The way we support dynamic shapes is to maintain a mapping from each input shape to its corresponding XLA graph.

Currently there is an inefficiency: on the first compilation we return the `optimized_mod` to PyTorch to replace the existing function, but in this process we do not populate the map from input shapes to the graph hash; that only happens on a later call. This is unnecessary. We should populate the mapping the first time we call `extract_graph_helper`.
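As a rough illustration of the caching behavior this PR targets, here is a minimal sketch of a shape-to-graph-hash cache that is populated on the first trace. The class name `ShapeCache` and the stand-in `extract_graph_helper` are hypothetical; this is not the actual torch_xla implementation, only the pattern of recording the mapping eagerly so repeated calls with an already-seen shape skip re-tracing.

```python
def extract_graph_helper(shapes):
    # Stand-in for tracing/compiling a graph for the given input
    # shapes; returns a deterministic "graph hash" for this sketch.
    return hash(shapes)


class ShapeCache:
    """Hypothetical cache mapping input shapes to graph hashes."""

    def __init__(self):
        self._shape_to_graph_hash = {}
        self.trace_count = 0  # how many times we actually traced

    def lookup_or_trace(self, shapes):
        # Populate the mapping the first time we trace a given input
        # shape, so later calls with the same shape reuse the cached
        # graph hash instead of re-tracing.
        if shapes not in self._shape_to_graph_hash:
            self.trace_count += 1
            self._shape_to_graph_hash[shapes] = extract_graph_helper(shapes)
        return self._shape_to_graph_hash[shapes]
```

With this structure, calling `lookup_or_trace((2, 3))` twice traces only once, and a new shape such as `(4, 3)` triggers exactly one additional trace.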