[WIP] Added strong cache for ILModuleReaders #283
Conversation
src/ProvidedTypes.fs
        let reader = createReader ilGlobals file
        (lastWriteTime, count + 1, reader)
    else
        (lastWriteTime, count, reader)
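For context, here is a minimal sketch of the pattern the diff above implements: a strongly-held table of readers keyed by file path, where a changed last-write-time invalidates the stale entry and bumps a rebuild count. The `ILModuleReader` stand-in and helper names are illustrative, not the real ProvidedTypes.fs code.

```fsharp
open System
open System.IO
open System.Collections.Concurrent

// Illustrative stand-in for the real ILModuleReader type.
type ILModuleReader = { File: string }

let createReader (file: string) : ILModuleReader = { File = file }

// Strong cache: entries stay alive until explicitly cleared; the stored
// last-write-time and rebuild count mirror the logic in the diff above.
let readerCache = ConcurrentDictionary<string, DateTime * int * ILModuleReader>()

let getReader (file: string) : ILModuleReader =
    let lastWriteTime = File.GetLastWriteTimeUtc file
    let (_, _, reader) =
        readerCache.AddOrUpdate(
            file,
            (fun _ -> (lastWriteTime, 1, createReader file)),
            (fun _ (cachedTime, count, cachedReader) ->
                if cachedTime <> lastWriteTime then
                    // The file changed on disk: rebuild the reader.
                    (lastWriteTime, count + 1, createReader file)
                else
                    (cachedTime, count, cachedReader)))
    reader
```

Repeated calls for an unchanged file return the same reader instance; only a changed timestamp triggers a rebuild.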
When do you remove?
How about a MemoryCache with a 30 second expiration?
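For reference, a sketch of the sliding-expiration idea, hand-rolled over a `ConcurrentDictionary` rather than using `System.Runtime.Caching`'s `MemoryCache` itself; all names are illustrative. Each lookup refreshes a last-access timestamp, and a sweep evicts entries idle longer than the window.

```fsharp
open System
open System.Collections.Concurrent

// Illustrative sliding-expiration cache: GetOrAdd refreshes the entry's
// last-access time on every hit; Sweep drops entries idle past the window.
type SlidingCache<'V>(window: TimeSpan) =
    let entries = ConcurrentDictionary<string, DateTime ref * 'V>()

    member _.GetOrAdd(key: string, create: string -> 'V) : 'V =
        let (lastAccess, value) =
            entries.GetOrAdd(key, fun k -> (ref DateTime.UtcNow, create k))
        lastAccess.Value <- DateTime.UtcNow   // refresh on every access
        value

    member _.Sweep() =
        let cutoff = DateTime.UtcNow - window
        for KeyValue(key, (lastAccess, _)) in entries do
            if lastAccess.Value < cutoff then
                entries.TryRemove key |> ignore
```

Something would still need to call `Sweep` periodically (a timer or background loop), which is exactly the invalidation policy being debated here.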
Timed caches can work, I guess, but I still feel a little uncomfortable with them.
@dsyme this isn't quite complete, because as it stands it is effectively a memory leak. We need to figure out when we can clear it out and give it some sort of life-cycle.
    [<Fact>]
    let ``test reader cache actually caches``() =
        for i = 1 to 1000 do
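The body of the test is truncated above; one way such a check could work, sketched with an illustrative counter and cache rather than the real ILModuleReader plumbing (in the real suite this would sit under `[<Fact>]`):

```fsharp
open System.Collections.Concurrent

// Illustrative: count how many times the creation function actually runs.
let mutable creations = 0
let cache = ConcurrentDictionary<string, string>()

let getCached (key: string) =
    cache.GetOrAdd(key, fun k ->
        creations <- creations + 1
        sprintf "reader-for-%s" k)

// A strong cache should create the value exactly once across 1000 lookups.
let ``test reader cache actually caches`` () =
    for _ = 1 to 1000 do
        getCached "some.dll" |> ignore
    if creations <> 1 then
        failwithf "expected 1 creation, got %d" creations
```

With a weak cache the count could be anywhere between 1 and 1000 depending on GC timing, which is why the exact pre-fix count is hard to pin down.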
This test looks great. Do you know the count before the cache is/was fixed?
I don't have the actual count as it would be hard to test with a weak reference, but at least the test verifies that we actually do a strong cache.
Yup. But looking good :)
@TIHan I resolved the merge conflict for you (I hope I didn't make a mistake)
So here is the result of my testing with a private build of
I also looked at the trace - the trace is too long (3 minutes), which meant that we threw out events. Traces need to be less than 30 seconds. @blumu Can you turn Diagnostic data up to "full" in the Windows Diagnostics & Feedback settings page? Your machine isn't sending any performance/Watson data, so I cannot see any past crashes, UI delays or typing delays.
Never mind, I found some data through other means, just digging through it.
@TIHan We don't need a trace to see GCs; if you look at the vs/perf/clrpausemsecs event in the session, it will show GC sizes/times. We send an event for any GC over 500ms. I don't yet have results for the above session, but I looked at past sessions, and GC time, in particular Gen 2, is responsible for almost all of your delays. The Gen 2 + Large Object Heap are both huge, resulting in GC after GC after GC, each causing large delays of upwards of 2 seconds. When the results are in for the above session in a day, @TIHan should be able to see if it's still GC that is causing the issue.
@TIHan We discussed using MemoryCache to allow resources to be reclaimed via a sliding window. Unfortunately, System.Runtime.Caching is available in neither .NET 4.5 nor .NET Standard 2.0, and TP design-time (TPDTC) components are now nearly always compiled as one or both of those, to allow them to deploy into most active tooling. So I will just accept this PR. Additionally, I have added an off-by-default resource reclaimer that clears the reader table every minute when type providers are in active use. To enable it, set the environment variable
to any non-empty value. It is likely the fixes we already have will be enough to reduce memory usage sufficiently.
A cache without an invalidation policy is a memory leak. Visual Studio spans solution loads, project loads, and branch switches, all of which could result in this cache growing unbounded.
The TPSDK should no longer support net45 if we cannot have a proper solution here. |
I don't think a sliding window is a great approach either; it should be tied to the lifetime of something: project data, or the TPs themselves.
I tested the fix. @davkean I am not sure if it's relevant, but for us the issue repros systematically right after launching VS and calling "Find all references" for the first time, which suggests that caching would help even within the scope of a single build. In other words, even if you were to invalidate the cache on every build, that would still resolve the perf issue for us. I can see how this would break incremental builds, though. So perhaps there is a better way to scope the cache, as you suggested.
Yes, we should clear here. For example, we could implement an expiration policy by starting a background async to clear the cache under some policy. Switching to System.Runtime.Caching.MemoryCache is not feasible, as TP components realistically need to be netstandard2.0, and it's not available there AFAICS.
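A sketch of the background-async approach mentioned here, with an illustrative table and a configurable period so it can be exercised quickly (the reclaimer discussed in this thread uses one minute; all names are hypothetical):

```fsharp
open System.Collections.Concurrent
open System.Threading

// Illustrative reader table; the real one maps assembly paths to ILModuleReaders.
let readerTable = ConcurrentDictionary<string, obj>()

// Periodically clear the table so readers can be reclaimed by the GC;
// cleared entries are simply re-created on the next request.
let startReclaimer (periodMs: int) (cancel: CancellationToken) =
    Async.Start(
        async {
            while not cancel.IsCancellationRequested do
                do! Async.Sleep periodMs
                readerTable.Clear()
        },
        cancellationToken = cancel)
```

Clearing wholesale trades some re-read cost for a bounded table; a fancier policy could evict only entries untouched since the last sweep.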
The problem is we need to artificially extend the lifetime of these objects so they are shared between TP instances. We don't have any larger handle/scope for which to share them.
@davkean I'll arrange for a better fix here. BTW I notice FSharp.Data itself has a bunch of caches too. It would be so good if we could truly load/unload components in isolation... |
This adds a strong cache for ILModuleReaders instead of a weak cache. Although we do a File.ReadAllBytes and the byte array lands on the LOH, the readers are long-lived, so that is appropriate. We include the file's last write time to invalidate the cache in case a user changes the assembly, though assemblies do not change frequently.