Provide a cache for Chalk RecursiveSolver #11667
Conversation
We can't reuse the cache like this because:

1. It will give wrong results when the code changes (e.g. if you remove an impl, the cache might still contain results that use this impl).
2. It will hide dependencies for the `trait_solve` query (e.g. solving a certain query would look up trait X, but that subgoal is already cached so it doesn't, so we don't see that dependency and don't recalculate the query when that trait changes).

(We actually used to reuse the solver with a Salsa query that recreated it whenever the database changed to get around this, but this resulted in worse performance, IIRC because it made us recalculate trait queries that didn't actually need to be recalculated all the time.)
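To illustrate the first problem with a toy example (all names here are hypothetical, not the actual rust-analyzer or Chalk types): an externally held memo cache keeps answering from stale state after the code it was computed from changes, because nothing invalidates it.

```rust
use std::collections::HashMap;

// Toy stand-in for trait solving: "does an impl matching this goal exist?"
// (hypothetical names; not the actual rust-analyzer/Chalk types)
struct Db {
    impls: Vec<&'static str>,
}

impl Db {
    fn solve(&self, goal: &str) -> bool {
        self.impls.iter().any(|i| *i == goal)
    }
}

// Returns (what the stale cache reports, what the db actually says)
// after an impl has been removed.
fn demo() -> (bool, bool) {
    let mut db = Db { impls: vec!["Clone for Foo"] };
    // A cache that lives outside the database and is never invalidated.
    let mut cache: HashMap<String, bool> = HashMap::new();
    let goal = "Clone for Foo";

    // First lookup: computed against the current code and cached.
    let first = *cache
        .entry(goal.to_string())
        .or_insert_with(|| db.solve(goal));
    assert!(first);

    // The user removes the impl; nothing tells the cache.
    db.impls.clear();

    // Second lookup: served from the cache, never re-solved.
    let cached = *cache
        .entry(goal.to_string())
        .or_insert_with(|| db.solve(goal));
    (cached, db.solve(goal))
}

fn main() {
    let (cached, actual) = demo();
    println!("cached={cached}, actual={actual}"); // cached=true, actual=false
}
```

The second problem is the mirror image: because the second lookup never runs the solver, a dependency-tracking system wrapped around `solve` would never see that the result depends on the impl set at all.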
To expand on this a bit:
I've looked into this and have worked around this issue by using a different cache keyed on the trait implementations and the crate map. This isn't as effective as fine-grained caching, but it works reasonably well.
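A coarse-grained cache like that might look roughly like this sketch (hypothetical names and shapes; in the real case the fingerprint would be derived from the crate graph and the set of trait impls rather than a plain counter):

```rust
use std::collections::HashMap;

// Hypothetical fingerprint of everything the cached results depend on;
// in practice derived from the crate graph and the trait impls, here
// just an opaque number.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Fingerprint(u64);

struct CoarseCache {
    valid_for: Fingerprint,
    entries: HashMap<String, bool>,
}

impl CoarseCache {
    fn new(fp: Fingerprint) -> Self {
        Self { valid_for: fp, entries: HashMap::new() }
    }

    // Serve from the cache while the fingerprint matches; throw everything
    // away as soon as it changes. Coarse, but never stale with respect to
    // whatever the fingerprint covers.
    fn get_or_insert_with(
        &mut self,
        fp: Fingerprint,
        goal: &str,
        solve: impl FnOnce() -> bool,
    ) -> bool {
        if fp != self.valid_for {
            self.entries.clear();
            self.valid_for = fp;
        }
        *self.entries.entry(goal.to_string()).or_insert_with(solve)
    }
}

fn demo() -> (bool, bool, bool) {
    let mut cache = CoarseCache::new(Fingerprint(1));
    // First request: computed and cached.
    let a = cache.get_or_insert_with(Fingerprint(1), "Clone for Foo", || true);
    // Same fingerprint: served from the cache, the solver is not called.
    let b = cache.get_or_insert_with(Fingerprint(1), "Clone for Foo", || {
        panic!("should have hit the cache")
    });
    // Changed fingerprint (e.g. an impl was removed): cleared and re-solved.
    let c = cache.get_or_insert_with(Fingerprint(2), "Clone for Foo", || false);
    (a, b, c)
}

fn main() {
    let (a, b, c) = demo();
    println!("{a} {b} {c}");
}
```

The trade-off is visible in the sketch: any change to the fingerprint discards every entry, even ones whose answers would not actually have changed.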
This is the main reason I am attempting to implement additional caching. Even when I ask for completion multiple times for the same code, some time is spent in
This is the profiling after the cache was warmed up:
This testing was done with limits increased/disabled, and was for calling a method. The main functions taking time are
This does not solve the second problem, of missing dependencies for the `trait_solve` query. Also, the impls in one crate and the crate graph are far from the only things that can affect the results of trait solving. Getting this correct would be extremely hard and very error-prone.
I have one theory for why this might actually make things faster: if we do indeed spend a lot of time validating dependencies for

As a more general note: Salsa is our main way of caching. Adding caches outside of Salsa, like this one for example, is mostly not a good idea, because it is very hard to integrate them correctly, and it is very hard to test that they are integrated correctly, because bugs will only show up when you change the code in specific ways.
Also, I'm not sure how useful it is to disable the flyimports when testing this. Allowing flyimports without a prefix is not really a goal right now, and making that faster will not necessarily make the normal completions, or even flyimports with a prefix, faster.
I believe this is how IntelliJ Rust works: first, they calculate the simple completions and show them; afterward, they calculate all possible completions asynchronously. However, I don't think this is possible with the current LSP.
Closing this for the time being, as this doesn't seem to be the right way to solve the problem here.
I've tested this with `DEFAULT_QUERY_SEARCH_LIMIT` set to 400000, and with automatic imports enabled without a minimum length. The first completion request takes around 1000ms, but repeating the same completion takes around 100ms using the cache.

One problem with this implementation is using `ChalkCache` as a wrapper. `ChalkCache` contains `Arc<Cache<...>>`, and `Cache` itself contains `Arc<...>`, resulting in two nested `Arc`s. This doesn't seem like a major concern, though.

Also, I'm not sure how to structure this. For one, calling `chalk_cache` seems to bounce back and forth between different files.

#7542