High memory usage #310
I'm experiencing the same issue, but with very high CPU usage as well. It is a lot more noticeable with diagnostics enabled.
Aah, yes, this is very likely related to having diagnostics enabled: enabling diagnostics starts the typechecker, which can be a huge memory hog. Unfortunately there is no quick fix here other than disabling diagnostics.
Note: as stated by @keegancsmith, this happens whenever the typechecker gets started, which is triggered by quite a few requests: diagnostics, references, implementation... see https://sourcegraph.com/github.com/sourcegraph/go-langserver@7df19dc017efdd578d75c81016e0b512f3914cc1/-/blob/langserver/loader.go#L27:23&tab=references
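As a rough illustration of why that is so expensive, here is a minimal sketch using golang.org/x/tools/go/loader. It is not go-langserver code (whether loader.go uses this exact package at that commit is an assumption, and net/http is just an arbitrary target), but it shows the same whole-program behaviour: typechecking one package forces every transitive dependency to be parsed and typechecked, and all of it is held in memory at once.

```go
package main

import (
	"fmt"

	"golang.org/x/tools/go/loader"
)

func main() {
	// Ask the loader to typecheck a single package. It also parses and
	// typechecks every transitive dependency, keeping all of the resulting
	// ASTs and type information in memory at the same time, which is where
	// the large allocations in the heap profiles come from.
	var conf loader.Config
	conf.Import("net/http") // arbitrary example; any sizeable package will do

	prog, err := conf.Load()
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println("packages held in memory:", len(prog.AllPackages))
}
```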
Yeah, this is a pretty bad problem for us, since the typechecking is so useful :) I think the future is bright (once we have time to implement it): the caching work in Go now records a lot more useful information for us, which means we can probably rely on the on-disk caching Go has now.
This seems to be closely related to #209. Both issues are due to the typechecker.
Having the same issue here. It sometimes takes up all available memory (e.g. 30GB), resulting in the OS freezing. Just a guess, but this feels more like a bug than simple inefficiency. Any details I can provide?
You can set … If the memory usage is coming from typechecking, and not from a regression such as leaked memory, then we likely cannot do anything yet. The long-term fix will come from the official Go language server, which the Go developers are actively working on (it is a difficult problem to solve).
I've been using the language server since yesterday and it was relatively well behaved, using only a few hundred MB. This morning I started making some edits, and the language server began consuming 80-100% CPU while memory spiked to 5GB. I managed to capture a heap snapshot: heap.zip. I also caught the tail end of the CPU spike: cpu.zip, though it looks like that might just be the heap collector. If it happens again I'll try to collect a CPU profile first. I should also note that this only lasted a minute or two, after which CPU and memory usage dropped back down.
This is probably a better CPU profile than the previous one: cpu.zip
This time the heap grew to 10GB: heap.zip |
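For anyone else wanting to capture profiles like these: the snippet below is a minimal sketch, not go-langserver's actual wiring. The process needs a net/http/pprof listener (the localhost:6060 address here is an assumption; whatever port the server exposes for profiling is what matters).

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Once a process serves these handlers, heap and CPU profiles can be
	// pulled from it while it is running.
	log.Println(http.ListenAndServe("localhost:6060", nil))
}
```

With such an endpoint listening, `go tool pprof http://localhost:6060/debug/pprof/heap` grabs a heap snapshot and `go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30` records a 30-second CPU profile.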
Both traces show the memory was allocated in the loader package. If you notice the memory usage does not drop back down after a minute or two, however, that would indicate a leak, and that is a bug we could fix.
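One cheap way to tell those two cases apart, if you are running your own build of the server, is to log Go heap statistics periodically and watch whether the in-use heap falls back after the typechecking spike. This is a hypothetical helper, not something go-langserver ships:

```go
package main

import (
	"log"
	"runtime"
	"time"
)

// logHeap prints heap statistics once a minute. If HeapInuse keeps climbing
// across edits instead of dropping back once typechecking finishes, that
// points at a leak rather than a transient typechecking spike.
func logHeap() {
	for range time.Tick(time.Minute) {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		log.Printf("heap in use: %d MiB, released to OS: %d MiB",
			m.HeapInuse>>20, m.HeapReleased>>20)
	}
}

func main() {
	// In a real server this would run as a background goroutine
	// (go logHeap()) inside the process being investigated.
	logHeap()
}
```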
I just had another occurrence. Here are the heap and CPU graphs: It seems to be the loader package still, although the second dump appears to involve the build package as well. Is there nothing that can be done about this in the interim?
I'm using Atom and its go-langserver integration. Memory usage is close to 18GB now as I type this. I cannot capture memory/CPU profiles because the port is not open.
BTW, this was triggered when I renamed a function, which produced many compilation errors. After 4 or 5 minutes, memory went back down to 1GB, but my computer was quite slow in the meantime.