inference performance: drop some unnecessary edges, cache more #29795


Merged: 3 commits merged into master from jn/inference-recursion-limits on Oct 29, 2018

Conversation

vtjnash
Member

@vtjnash vtjnash commented Oct 24, 2018

As noted by Simon in analyzing #27874, compiler performance can become unstable in the presence of inference-limiting recursion. This is rare in most code, but extremely common in the IO show code. Now that we have fully linearized IR, we can easily detect some of these cases and avoid some of that computational work and rework.

Note that backedges can't represent the full suite of information that was contained in this forward edge, so we're still forced to make an over-approximation (until we switch that design).

When the return value of the call is statically unused, we can discard the result earlier and avoid involving it in cycle resolution or backedge computation (since it has no effect). We typically expect that all members in a cycle have the same attributes. This may not be strictly necessary, but it is how we record and track the items in a cycle.
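The core idea of the commit message above can be sketched with a toy model. This is Python pseudocode of the concept only, not the actual Julia compiler code; the `InferenceCache` class and its fields are invented for illustration. The point it shows: when a caller never uses a callee's return value, the engine can skip recording a backedge (the dependency edge used for cache invalidation), shrinking the edge set that must be maintained and re-walked.

```python
class InferenceCache:
    """Toy model of an inference cache with backedge tracking."""

    def __init__(self):
        self.results = {}    # method name -> cached inferred result
        self.backedges = {}  # callee name -> set of callers to invalidate

    def infer(self, method, callees):
        """Infer `method`, given `callees` as (callee_name, result_used) pairs."""
        if method in self.results:
            return self.results[method]  # cache hit: no recomputation
        for callee, result_used in callees:
            if result_used:
                # The caller's result depends on the callee's return type,
                # so record a backedge for future invalidation.
                self.backedges.setdefault(callee, set()).add(method)
            # If the return value is statically unused, no backedge is
            # needed: redefining `callee` cannot change `method`'s result,
            # so we avoid involving it in edge/cycle bookkeeping entirely.
        self.results[method] = f"ret({method})"
        return self.results[method]
```

For example, inferring `f` with callees `[("g", True), ("h", False)]` records a backedge only from `g` to `f`; the unused call to `h` leaves no edge behind.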
@KristofferC
Member

As noted by Simon in analyzing #27874

:(

@vtjnash
Member Author

vtjnash commented Oct 25, 2018

Oops, sorry Kristoffer. I have no idea why I implicated some entity named Simon, when you did all of the work.

@martinholters
Member

Kristoffer "Simon" Carlsson. I like that.

@KristofferC
Member

Why is this backportable but something like #29551 is not?

@StefanKarpinski
Member

Since no reason was ever given for not backporting #29551, I've marked it for backport.

@vtjnash
Member Author

vtjnash commented Oct 25, 2018

Actually, I thought you had already marked that for backporting, and was just following suit here.

@vtjnash vtjnash merged commit cd0bd33 into master Oct 29, 2018
@vtjnash vtjnash deleted the jn/inference-recursion-limits branch October 29, 2018 17:43
@KristofferC
Member

This caused the regressions shown in #29444 (comment). Dropping from backporting.

@vtjnash
Member Author

vtjnash commented Oct 31, 2018

@nanosoldier runbenchmarks(ALL, vs = "@44f2563d11f3a6fa56e9a059a8349df53cdc8eb8")

vtjnash added a commit that referenced this pull request Oct 31, 2018
vtjnash added a commit that referenced this pull request Oct 31, 2018
@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

vtjnash added a commit that referenced this pull request Nov 2, 2018
@c42f c42f mentioned this pull request Nov 8, 2018
tkf pushed a commit to tkf/julia that referenced this pull request Nov 21, 2018
KristofferC pushed a commit that referenced this pull request Nov 28, 2018
KristofferC pushed a commit that referenced this pull request Dec 12, 2018
KristofferC pushed a commit that referenced this pull request Feb 11, 2019
KristofferC pushed a commit that referenced this pull request Feb 20, 2020
Labels
compiler:inference (Type inference), performance (Must go faster)
5 participants