JIT: Improve weight distribution in internal blocks produced by casts #50082
Conversation
@AndyAyersMS PTAL
With PGO we still won't know the bias of the internal branches, so the conditional block weights will still be guesses. For the castclass cases we should guess that the cast will succeed. For isinst it's not as clear what the right guesses are. We could consider adding probes to capture the outcome of casts so we could properly weight these blocks, but I am not sure it is worth the trouble.
@AndyAyersMS
We usually expand them like this:

```cpp
if (obj == null)
    return null;
else if (obj->pMT == targetHandle)
    return obj;
else
    return CORINFO_HELP_CHKCASTCLASS_SPECIAL(); // for throwing InvalidCastException
```

The last block is cold; the first two are 50/50 in the general case. I don't think we can safely assume "== null" is colder than the second block.

```cpp
if (obj == null)
    return null;
else
    return CORINFO_HELP_CHKCASTCLASS_SPECIAL();
```

In this case both are 50/50.
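One way to picture the weighting question above is to distribute the entry block's weight across the three expanded blocks using explicit guessed likelihoods. The sketch below is purely illustrative: the struct, function, and likelihood values are hypothetical and not the actual JIT code; the specific numbers (10% null, 90% type-check hit) are assumptions, not measurements.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: distribute an entry block's weight across the three
// blocks of an expanded castclass, given guessed branch likelihoods.
// All names and likelihood values here are illustrative assumptions.
struct CastExpansionWeights
{
    float nullReturn; // "obj == null" taken path
    float fastPath;   // "obj->pMT == targetHandle" hit
    float helperCall; // fallback helper (cold for castclass: failure throws)
};

CastExpansionWeights GuessCastWeights(float entryWeight)
{
    // Guess: user code rarely casts null, and for castclass we expect the
    // type check to succeed (a failure would throw InvalidCastException).
    const float nullLikelihood = 0.1f; // assumed, not measured
    const float hitLikelihood  = 0.9f; // of the non-null remainder

    CastExpansionWeights w;
    w.nullReturn = entryWeight * nullLikelihood;
    float nonNull = entryWeight - w.nullReturn;
    w.fastPath    = nonNull * hitLikelihood;
    w.helperCall  = nonNull - w.fastPath;
    return w;
}
```

However the likelihoods are chosen, the three block weights always sum back to the entry weight, which is the consistency property the discussion below turns on.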
We are headed towards a model where the jit acts as if it always has consistent profile data. As part of that, each flow transformation needs to update the block weights and edge likelihoods on the spot, so that consistency is preserved (if possible) and the new block and edge weights reflect our best understanding of what will happen at runtime. Also, any weight derived from a PGO weight should be considered a PGO weight. So the existing code had the right idea in doing those profile updates; we should not defer them until later.

The question is then whether those updates preserve consistency and best reflect reality. For a transformation like this, where we aren't measuring the actual likelihoods (at least not yet, and maybe not ever), we need to plug in our best guesses as to what is likely. My guess is that user code doesn't cast null pointers very often, so the first test should be biased towards doing the second test. I'm not as sure about the second test, unless the outcome of a failed compare is a throw.

We can actually set up a fixture to measure these outcomes if you think it would better inform the weight setting. It would be a one-off thing that would require special profiling runs to capture.
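The consistency model described above can be stated as two local invariants: a block's weight should equal the sum of its incoming edge weights, and the likelihoods on its outgoing edges should sum to 1. Here is a minimal, self-contained sketch of that check; the `Edge` type and function are hypothetical and do not reflect the actual JIT's flow-graph representation.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical edge record: likelihood of being taken from its source block,
// and the absolute weight flowing along it. Illustrative only.
struct Edge
{
    float likelihood;
    float weight;
};

// Check the two local profile-consistency invariants described above:
//  1. sum of incoming edge weights == block weight
//  2. sum of outgoing edge likelihoods == 1 (unless the block has no successors)
bool IsProfileConsistent(float blockWeight,
                         const std::vector<Edge>& incoming,
                         const std::vector<Edge>& outgoing,
                         float tolerance = 0.01f)
{
    float inWeight = 0.0f;
    for (const Edge& e : incoming)
        inWeight += e.weight;

    float outLikelihood = 0.0f;
    for (const Edge& e : outgoing)
        outLikelihood += e.likelihood;

    return (std::fabs(inWeight - blockWeight) <= tolerance) &&
           (outgoing.empty() || std::fabs(outLikelihood - 1.0f) <= tolerance);
}
```

A flow transformation that updates weights "on the spot" would re-establish both invariants for every block it touches before returning.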
@AndyAyersMS @dotnet/jit-contrib PTAL
src/coreclr/jit/morph.cpp (Outdated)

```cpp
// Currently, we don't instrument internal blocks, so the only way we can set weights for these blocks
// is to analyze successors and take a guess.
float cond2BlockWeight = 0;
if (currBbWeight > 0)
```
This needs a comment explaining why the computation makes sense.
I don't think values of 100 and 50 should enter directly (the scaling should be based entirely on the weights from neighboring blocks, and perhaps some ad-hoc guesses, which you should call out explicitly).
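The review point above can be illustrated with a small sketch: scale the internal block's weight from its predecessor's weight and a single, explicitly named guess, rather than from bare constants like 100 or 50. The constant name and value here are hypothetical assumptions, not the actual fix.

```cpp
#include <cassert>

// Assumed, explicitly called-out guess for how often the first (null) check
// falls through to the second type check. Illustrative value, not measured.
constexpr float kGuessedNonNullLikelihood = 0.5f;

// Derive the second condition block's weight purely from its neighboring
// (predecessor) block's weight, scaled by the named guess above.
float GuessCond2BlockWeight(float predBlockWeight)
{
    return predBlockWeight * kGuessedNonNullLikelihood;
}
```

Because the result is a pure scaling of the predecessor's weight, it stays meaningful whether that weight came from PGO data or from static heuristics, which is what the "based entirely on neighboring blocks" suggestion is after.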
I've added some comments around it (covers the 50 value).
Will get back to it after #50490
Fixes #50030
Codegen diff: https://www.diffchecker.com/MEyYf7UH
So we just let `optSetBlockWeights` handle the weights.

jit-diff: