create a "provisional cache" to restore performance in the case of cycles #61754
Conversation
It does not, in fact, execute in a recursive context.
I tried to propagate this information with the return value, but I found a curiosity (actually, something that I'm not keen on in general) -- in particular, the candidate cache currently invokes evaluation, which may detect a cycle, but the "depth" that results from that is not easily propagated back out. This probably means that the candidate caching mechanism *itself* is sort of problematic, but I'm choosing to ignore that in favor of a more ambitious rewrite later.
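To illustrate the shape of the problem, here is a minimal sketch (hypothetical names such as `CandidateCache` and `EvaluatedCandidate`, not the actual rustc code): if the cache stores only the chosen candidate, the "reached depth" discovered when evaluation detects a cycle has nowhere to live, so a later cache hit cannot report that it depended on a cycle.

```rust
use std::collections::HashMap;

// Hypothetical, simplified stand-ins for the real trait-solver types.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Obligation(u32);

#[derive(Clone, Copy)]
struct Candidate;

/// Result of evaluating a candidate, together with the shallowest stack
/// depth the evaluation depended on (set when a cycle was detected).
struct EvaluatedCandidate {
    candidate: Candidate,
    reached_depth: usize,
}

/// A candidate cache that remembers only the chosen candidate. Once an
/// entry is stored, the `reached_depth` from the evaluation that produced
/// it is gone, so later cache hits cannot report that they (transitively)
/// participated in a cycle.
struct CandidateCache {
    map: HashMap<Obligation, Candidate>,
}

impl CandidateCache {
    fn get(&self, obligation: &Obligation) -> Option<Candidate> {
        // On a hit, there is no depth/cycle information left to propagate
        // back out to the caller.
        self.map.get(obligation).copied()
    }

    fn insert(&mut self, obligation: Obligation, evaluated: EvaluatedCandidate) {
        // `evaluated.reached_depth` is silently dropped here.
        self.map.insert(obligation, evaluated.candidate);
    }
}
```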
So I think I found the bug causing cycle-cache-err-60010.rs to unexpectedly pass -- a copy-and-paste error. Working on a fix. UPDATE: Or...not. Not sure what the problem is.
I think the problem is fixed; not sure what's up with Travis.
/// error. When we pop the node at `reached_depth` from the stack, we
/// can commit all the things that remain in the provisional cache.
struct ProvisionalEvaluationCache<'tcx> {
    /// next "depth first number" to issue -- just a counter
nit: add `(DFN)` after "depth first number" as a way of informally binding the acronym in a manner that makes it show up when one does a (case-sensitive) search to find such a binding in the docs.
(oh I guess it does show up in this "binding" fashion below, in the doc for `struct ProvisionalEvaluation`)
You might also explicitly state that the reason it is a "depth-first number" (and not just a "depth") is that it identifies where a node falls in the graph when laid out in depth-first order, right?
(I briefly thought "there must be a standard term for this notion that is less awkward", but I didn't see anything better than "DFS number", which does not seem superior to "DFN".)
DFN is terminology that I took from various Prolog papers -- probably this one:
Efficient Top-Down Computation of Queries under the Well-Founded Semantics (Chen, Swift, and Warren; Journal of Logic Programming '95)
so it's not totally nonstandard =)
Actually, I'm not sure why I'm tracking both depth and DFN -- I think you only need the DFN.
I think I just started with depth and had to add DFN later and didn't realize I could simplify it.
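As a side note, here is a minimal sketch of the DFN idea (hypothetical names, not the PR's actual implementation): DFNs are issued from a single counter as goals are pushed in depth-first order, so the DFN alone locates a goal in the evaluation graph, and a separate depth field becomes redundant.

```rust
use std::cell::Cell;

/// A depth-first number (DFN): assigned from a monotonically increasing
/// counter as goals are pushed onto the evaluation stack, so it encodes
/// where a goal falls in depth-first order.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Dfn(usize);

struct DfnCounter {
    next: Cell<usize>,
}

impl DfnCounter {
    fn new() -> Self {
        DfnCounter { next: Cell::new(0) }
    }

    /// Issue the next DFN.
    fn next_dfn(&self) -> Dfn {
        let n = self.next.get();
        self.next.set(n + 1);
        Dfn(n)
    }
}

fn main() {
    let counter = DfnCounter::new();
    // Goals pushed in depth-first order each get the next DFN. Everything
    // evaluated while goal `a` is still on the stack gets a DFN greater
    // than a's, so comparing DFNs is enough to tell whether an entry was
    // computed "under" a given cycle head -- no separate depth is needed.
    let a = counter.next_dfn();
    let b = counter.next_dfn();
    assert!(b > a);
}
```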
put back negative impls to ensure `Span` is neither `Send` nor `Sync`
none of my feedback is severe enough to block an r+, so I'll go ahead and mark it so.
@bors r+ rollup=never
📌 Commit bb97fe0 has been approved by `pnkfelix`
Shouldn't this be removed as well? (rust/src/libsyntax/tokenstream.rs, lines 52 to 60 in bb97fe0)
You'll need
Is that a flaky
It doesn't look like it should be, based on the number of panics, but I'm not sure. It does use quickcheck, which introduces a level of randomness that could lead to it being spurious and/or not related to this PR in particular (cc @BurntSushi as the author). Since I suspect it is unrelated to this PR, let's retry: @bors retry xsv spurious test
create a "provisional cache" to restore performance in the case of cycles Introduce a "provisional cache" that caches the results of auto trait resolutions but keeps them from entering the *main* cache until everything is ready. This turned out a bit more complex than I hoped, but I don't see another short term fix -- happy to take suggestions! In the meantime, it's very clear we need to rework the trait solver. This resolves the extreme performance slowdown experienced in #60846 -- I plan to add a perf.rust-lang.org regression test to track this. Caveat: I've not run `x.py test` in full yet. r? @pnkfelix cc @arielb1 Fixes #60846
☀️ Test successful - checks-travis, status-appveyor
Given the number of beta-regressions that we believe were injected by PR #60444 (see #61960), I am beta-nominating this PR. However, I am also a little nervous about back-porting this PR to the current beta, which (if I understand correctly) is scheduled to become the stable release in about two weeks, while this PR itself only landed in nightly 3 days ago. So: do not immediately assume my nomination is also an endorsement for a backport; I am just trying to initiate discussion of the matter.
Discussed at the T-compiler meeting. Given that this just landed in nightly, we're going to give it a week to bake before officially accepting it for beta backport. (But our current inclination is to accept.)
@rust-lang/compiler We need to get beta backports approved a little earlier this cycle as we're promoting beta->stable on Saturday. If we could get an FCP going for beta-accepted on this PR that'd be great; or if someone wants to unilaterally beta accept that works too.
I'm going to unilaterally beta-accept this. We did talk about the beta-nom of #61754 at the T-compiler meeting last week, and the main misgivings we had were: 1. it didn't seem to resolve all of the performance and overflow issues, 2. it was a bit big+risky for a backport, and 3. it hadn't had much time to bake. But, now it has had time to bake, and (as also stated at the meeting), it fixes a critical issue. So, accepting for beta-backport.
…caching-perf-3, r=pnkfelix
create a "provisional cache" to restore performance in the case of cycles
[beta] Rollup backports

Rolled up:
* [beta] Comment out dev key #61700

Cherry picked:
* Dont ICE on an attempt to use GAT without feature gate #61118
* Fix cfg(test) build for x86_64-fortanix-unknown-sgx #61503
* Handle index out of bound errors during const eval without panic #61598
* Hygienize macros in the standard library #61629
* Fix ICE involving mut references #61947
* rustc_typeck: correctly compute `Substs` for `Res::SelfCtor`. #61896
* Revert "Set test flag when rustdoc is running with --test option" #61199
* create a "provisional cache" to restore performance in the case of cycles #61754

r? @ghost
This should have had the beta-nominated flag removed as part of PR #62065, right?
Introduce a "provisional cache" that caches the results of auto trait resolutions but keeps them from entering the main cache until everything is ready. This turned out a bit more complex than I hoped, but I don't see another short term fix -- happy to take suggestions! In the meantime, it's very clear we need to rework the trait solver. This resolves the extreme performance slowdown experienced in #60846 -- I plan to add a perf.rust-lang.org regression test to track this.
Caveat: I've not run `x.py test` in full yet.

r? @pnkfelix
cc @arielb1
Fixes #60846
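For readers skimming the description, here is a heavily simplified sketch of the mechanism it describes. The types (`Goal`, `EvalResult`, `Caches`) and the per-entry `reached_depth` bookkeeping are hypothetical simplifications, not the PR's actual `ProvisionalEvaluationCache`: results computed while a cycle is still open are held in a provisional map, and only once the stack entry they depended on is popped are they promoted into the main cache.

```rust
use std::collections::HashMap;

// Hypothetical goal/result types, for illustration only.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Goal(u32);

#[derive(Clone, Copy, Debug)]
struct EvalResult(bool);

/// Two-level caching: results that participated in a (still-open) cycle are
/// held back in `provisional`, and only promoted into `main` once the cycle
/// head they depended on has been popped from the evaluation stack.
struct Caches {
    main: HashMap<Goal, EvalResult>,
    // goal -> (depth of the cycle head it depends on, result)
    provisional: HashMap<Goal, (usize, EvalResult)>,
}

impl Caches {
    fn insert_provisional(&mut self, goal: Goal, reached_depth: usize, result: EvalResult) {
        // While the cycle is still open, cache the result only provisionally.
        self.provisional.insert(goal, (reached_depth, result));
    }

    /// Called when the stack entry at `depth` is popped. Entries whose cycle
    /// head was at `depth` or deeper no longer depend on anything still on
    /// the stack, so they can be committed to the main cache.
    fn on_completion(&mut self, depth: usize) {
        let provisional = std::mem::take(&mut self.provisional);
        for (goal, (reached, result)) in provisional {
            if reached >= depth {
                self.main.insert(goal, result);
            } else {
                // Still depends on an entry lower in the stack (a smaller
                // depth) that has not been popped yet; keep it provisional.
                self.provisional.insert(goal, (reached, result));
            }
        }
    }
}
```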