inference performance: drop some unnecessary edges, cache more #29795
Conversation
When the return value of a call is statically unused, we can discard the result earlier and avoid involving it in cycle resolution or backedge computation, since it can have no effect on the caller.
We generally expect all members of a cycle to share the same attributes. That may not be strictly necessary, but it is how we currently record and track the items in a cycle.
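For intuition, here is a hypothetical sketch in Python (not the actual Julia compiler code; the `Analyzer` class and its fields are invented for illustration): when a caller statically discards a callee's result, recording a dependency edge for that call buys nothing, so the analyzer can skip it and keep the callee out of the caller's invalidation set.

```python
# Hypothetical sketch (not the Julia compiler): a memoizing analyzer that
# only records a dependency edge when the caller actually consumes the
# callee's inferred result. An unused result cannot affect the caller, so
# no edge (and no cycle membership on its account) is needed.

class Analyzer:
    def __init__(self, infer):
        self.infer = infer      # per-function inference, e.g. f -> result
        self.cache = {}         # memoized inference results
        self.backedges = {}     # callee -> set of callers to invalidate

    def analyze(self, callee, caller=None, result_used=True):
        if callee not in self.cache:
            self.cache[callee] = self.infer(callee)
        # Drop the edge early when the result is statically unused:
        # the caller cannot observe changes to the callee's return type.
        if caller is not None and result_used:
            self.backedges.setdefault(callee, set()).add(caller)
        return self.cache[callee]

a = Analyzer(infer=lambda f: f"type-of-{f}")
a.analyze("g", caller="f", result_used=True)    # edge f -> g recorded
a.analyze("h", caller="f", result_used=False)   # result discarded: no edge
```

Under this sketch, a later change to `h` would not force re-inference of `f`, while a change to `g` still would.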
:(
Oops, sorry Kristoffer. I have no idea why I implicated some entity named Simon, when you did all of the work.
Kristoffer "Simon" Carlsson. I like that.
Why is this backportable but something like #29551 is not?
Since no reason was ever given for not backporting #29551, I've marked it for backport.
Actually, I thought you had already marked that for backporting, and was just following suit here.
This caused the regressions shown in #29444 (comment). Dropping from backporting.
@nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan
As noted by Simon in analyzing #27874, compiler performance can become unstable in the presence of inference-limiting recursion. This is normally rare in most code, but is actually extremely common in the IO show code. Now that we have fully linearized IR, we can easily detect some of these cases and avoid some computational work and rework.
Note that backedges can't represent the full suite of information that was contained in this forward edge, so we're still forced to make an over-approximation (until we switch that design).
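A hypothetical sketch of that over-approximation (invented names, not the compiler's actual data structures): a forward edge can carry the precise context the caller depended on, such as the argument types at the call site, while a backedge stored on the callee records only who to invalidate. Without the signature, any change to the callee must conservatively invalidate the caller.

```python
# Hypothetical illustration of forward-edge vs. backedge precision.
# The forward edge remembers the exact signature the caller depended on:
forward_edge = {
    "caller": "f",
    "callee": "g",
    "argtypes": ("Int",),   # the specific signature f depended on
}

# What survives once this is converted to a backedge stored on g:
backedge_on_g = {"invalidate": {"f"}}

def invalidated_by_change(backedge, changed_sig):
    # changed_sig is ignored: with no argument types recorded, we must
    # assume any change to g may affect f (the over-approximation).
    return backedge["invalidate"]

# g changed only at signature ("String",), which f never calls, yet f is
# still invalidated because the backedge kept no signature information.
stale = invalidated_by_change(backedge_on_g, ("String",))
```

A forward-edge check could have compared `("String",)` against the recorded `("Int",)` and skipped the invalidation; the backedge cannot.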