JIT: import entire method for OSR, prune unneeded parts later #83910

Merged: 3 commits into dotnet:main, Mar 26, 2023

Conversation

AndyAyersMS (Member)

For OSR compiles, always import from the original entry point in addition to the OSR entry point. This gives the OSR compiler a chance to see all of the method and so properly compute address exposure, instead of relying on the Tier0 analysis.

Once address exposure has been determined, revoke special protection for the original entry and try to prune away blocks that are no longer needed.

Fixes #83783.

May also fix some of the cases where OSR perf is lagging (though I don't expect this to fix them all).
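As a note for readers, the following standalone C++ sketch models the idea described above; the block names and data structures are invented for illustration and are not the RyuJIT code. It treats both the original entry and the OSR entry as import roots, computes exposure over everything imported, then drops the original entry's artificial protection and prunes blocks that only it could reach.

```cpp
#include <cstdio>
#include <set>
#include <vector>

// Toy flow-graph node: just successors plus a stand-in "takes the address
// of a local" bit that the exposure analysis is looking for.
struct Block
{
    int                 id;
    std::vector<Block*> succs;
    bool                exposesLocal = false;
};

static void reach(Block* b, std::set<Block*>& seen)
{
    if (!seen.insert(b).second)
    {
        return; // already visited
    }
    for (Block* s : b->succs)
    {
        reach(s, seen);
    }
}

int main()
{
    // Original entry -> B1 -> B2 (loop); the OSR entry jumps straight into B2.
    Block entry{0}, b1{1}, loop{2}, osrEntry{3};
    entry.succs     = {&b1};
    b1.succs        = {&loop};
    loop.succs      = {&loop};
    osrEntry.succs  = {&loop};
    b1.exposesLocal = true; // only visible if the original entry is imported too

    // "Import" from both roots so the whole method is visible to the analysis.
    std::set<Block*> imported;
    reach(&osrEntry, imported);
    reach(&entry, imported);

    bool anyExposed = false;
    for (Block* b : imported)
    {
        anyExposed |= b->exposesLocal;
    }
    std::printf("address exposure seen during import: %s\n", anyExposed ? "yes" : "no");

    // Once exposure is known, revoke the original entry's protection and prune
    // whatever only it could reach.
    std::set<Block*> live;
    reach(&osrEntry, live);
    for (Block* b : imported)
    {
        if (live.count(b) == 0)
        {
            std::printf("pruning B%d (unreachable from the OSR entry)\n", b->id);
        }
    }
    return 0;
}
```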

@dotnet-issue-labeler bot added the area-CodeGen-coreclr label (CLR JIT compiler in src/coreclr/src/jit and related components such as SuperPMI) on Mar 24, 2023
@ghost assigned AndyAyersMS on Mar 24, 2023
@ghost commented Mar 24, 2023

Tagging subscribers to this area: @JulieLeeMSFT, @jakobbotsch, @kunalspathak
See info in area-owners.md if you want to be subscribed.

@AndyAyersMS (Member, Author)

FYI @jakobbotsch
cc @dotnet/jit-contrib

There are several ways to approach this -- in this version the original entry is unreachable, but we force it to stay around; alternatively, we could make the OSR entry unreachable and force it to stay around, and then swap entry points later.

This one is a bit less disruptive; if it pans out I can also reconcile it with the existing code that, in OSR mode, forcibly imports the original entry when we think the method might tail call (which is how I know this approach will "work", since we already do that extra importation some of the time). For tail call cases we keep the original entry protected until after morph, so a bit later than what we do here. The protection mechanisms are different, so the tail call version still works.

This will be difficult to assess via SPMI as many OSR contexts will fail to replay. But some of them do and the diffs generally look encouraging.

It will also hurt OSR TP but again this may be hard to spot in SPMI. In actuality OSR compiles are pretty rare so I'm not that worried about the extra work.

A third option is to just compile the method normally and then once we're past morph say, rework the control flow to jump to the OSR entry from scratch. That would be more work because I'd have to revise the whole "OSR step block" scheme so that it could run later, in case the OSR entry happens to be in the middle of a bunch of try regions.

Much of what happens before morph is currently flow insensitive, so it doesn't matter that a big swath of blocks are unreachable. It might mess up early liveness, but if so we already have this problem in methods that might tail call.

This may fix some latent OSR perf issues, but I don't expect it to fix them all.

@AndyAyersMS (Member, Author)

Hmm, looks like a fairly persistent assert.

@AndyAyersMS (Member, Author)

The problem is that once you set BBF_DONT_REMOVE it is hard to safely un-set it. So I will just revise this to use the existing artificial ref count solution. This will keep the extra IR and blocks around a bit longer but be simpler overall.
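As an aside on the mechanism, here is a rough illustration (toy types, not the real BasicBlock) of why a reference count composes better than a flag: clearing a shared "don't remove" bit can break another phase that also set or relies on it, whereas each protector can add and later remove only its own artificial reference.

```cpp
#include <cstdio>

// Toy stand-in for a flow-graph block; the real type has far more state.
struct BasicBlockModel
{
    unsigned bbRefs  = 0; // incoming edges plus any artificial references
    unsigned bbFlags = 0; // sticky bit flags, e.g. a hypothetical DONT_REMOVE bit
};

// Protect the original entry while importation and exposure analysis run...
void protectEntry(BasicBlockModel* block)
{
    block->bbRefs++;
}

// ...then drop only our own reference. Unlike blindly clearing a shared flag,
// this cannot revoke protection that some other phase still depends on.
void unprotectEntry(BasicBlockModel* block)
{
    block->bbRefs--;
}

int main()
{
    BasicBlockModel originalEntry;
    protectEntry(&originalEntry);   // keep the block alive during import
    // ... importation and address-exposure computation would happen here ...
    unprotectEntry(&originalEntry); // the block may now be pruned if unreachable
    std::printf("artificial refs remaining: %u\n", originalEntry.bbRefs);
    return 0;
}
```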

@AndyAyersMS (Member, Author)

/azp run runtime-jit-experimental

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@AndyAyersMS (Member, Author)

Test failures seem unrelated. Passed jit-experimental which has various forms of OSR stress.

@AndyAyersMS marked this pull request as ready for review on March 25, 2023 at 14:57
@jakobbotsch (Member) left a comment

LGTM! I thought this would be a much more involved change

```diff
@@ -334,19 +334,6 @@ void Compiler::lvaInitTypeRef()
         {
             LclVarDsc* const varDsc = lvaGetDesc(lclNum);
             varDsc->lvIsOSRLocal = true;
 
-            if (info.compPatchpointInfo->IsExposed(lclNum))
-            {
```
jakobbotsch (Member)
Do we still need to communicate the exposure information in the patchpoint information?

AndyAyersMS (Member, Author)

No. However, I'll leave this as is for now so older jits can still do somewhat reasonable things with newer SPMI data.

@AndyAyersMS (Member, Author)

> LGTM! I thought this would be a much more involved change

Luckily (I guess) we already had this capability for some OSR methods, so now we just use it for all of them.

@AndyAyersMS merged commit f1f9fde into dotnet:main on Mar 26, 2023
AndyAyersMS added a commit to AndyAyersMS/runtime that referenced this pull request Mar 27, 2023
When I changed the importation strategy for OSR in dotnet#83910 it exposed a latent issue -- small OSR locals must be normalized on load.

Fixes dotnet#83959.
AndyAyersMS added a commit that referenced this pull request Mar 28, 2023
When I changed the importation strategy for OSR in #83910 it exposed a latent issue -- small OSR locals must be normalized on load if they were exposed at Tier0.

Fixes #83959.
Fixes #83960.
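For context, "normalize on load" means every read of a small-typed local re-extends the value from its declared width instead of trusting the whole stack slot, which matters once the local's address may have been used for a widened store. A tiny standalone C++ illustration of the effect (not JIT code):

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    // Pretend this 4-byte slot backs an exposed int16 local, and a widened
    // store through its address left junk in the upper bits of the slot.
    int32_t slot = 0x7FFF1234;

    int32_t rawRead        = slot;          // trusting the full slot: wrong value
    int32_t normalizedRead = (int16_t)slot; // normalize on load: sign-extend the low 16 bits

    std::printf("raw read = %d, normalized read = %d\n", rawRead, normalizedRead);
    return 0;
}
```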
@AndyAyersMS (Member, Author)

Also seeing a regression here (not (yet) autofiled): ubuntu x64

[benchmark chart image: ubuntu x64 regression]

Labels
area-CodeGen-coreclr CLR JIT compiler in src/coreclr/src/jit and related components such as SuperPMI
Development

Successfully merging this pull request may close these issues.

JIT: OSR is not conservative enough about exposing struct locals