inference: stop re-converging worlds after optimization #38820
Conversation
The validity did not change, so we should not need to update it. This also ensures we copy over all result information earlier, so we can destroy the InferenceState slightly sooner, with slightly cleaner data flow.
LGTM in general, but I had earlier tried a change like this and saw measurable regressions in sysimg build time. Does that happen here also?
```diff
  end
- if last(valid_worlds) == typemax(UInt)
+ if doopt && last(valid_worlds) == typemax(UInt)
```
Why not move this up into the previous loop at this point?
This step might require taking a lock (since we're updating a few different caches, that all need to happen at once), so I like having them appear separately, though it's not essential.
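The point about keeping the steps separate can be illustrated with a small sketch. This is not the Compiler's actual locking code; the lock name and cache variables are invented. It shows why grouping the cache updates into one phase helps: all of them happen atomically under a single lock acquisition.

```julia
# Hypothetical: several caches that must stay mutually consistent.
const cache_lock = ReentrantLock()
const code_cache = Dict{Symbol,Int}()
const world_cache = Dict{Symbol,UInt}()

# Keeping the publish step as its own phase means one lock scope can
# update every cache at once, so readers never see a partial update.
function publish!(name::Symbol, code::Int, world::UInt)
    lock(cache_lock) do
        code_cache[name] = code
        world_cache[name] = world
    end
end

publish!(:f, 42, typemax(UInt))
```

If the update were fused into the earlier loop, each iteration would either hold the lock for the whole loop or take it repeatedly; a separate phase keeps the critical section small and self-contained.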
I instrumented this to show that it never made a difference, while building the system image, on the world ages that we cached. Seems to also run at about the same speed.
The validity did not change, so we should not need to update it. This also ensures we copy over all result information earlier, so we can destroy the InferenceState slightly sooner, and slightly cleaner data flow. (cherry picked from commit 8c01444)
This is preparatory work for further reorganization of how optimization choices are ordered (#38231).