Remove reference tensor creation in producer tensor indexing path #1750
Conversation
LGTM
@@ -151,6 +165,32 @@ IndexingParameters getGlobalIndexParameters(
      loop_indexing.loopDomains(),
      index_parameters.initial_concrete_id_index);

  // Setup double buffer increment for producer case:
I don't remember the double buffering part of indexing. @naoyam, could you double check the double buffering portions of the changes in this PR stack?
This just increments the producer index by one on the main stage of the double-buffer loop when the consumer is double buffered, i.e. prefetching. We can now make the logic more explicit.
I'll do that in a follow-up, as the merged PR is already getting huge.
That's right. double_buffer_loop is a loop that's double buffered in the current loop nest. We can just skip the rest if it's nullptr.
Fixes #ISSUE_NUMBER