x/tools/gopls: high memory consumption #45457
Thank you for reporting this issue and attaching the zip. One common issue with high memory usage in |
Thanks for that suggestion! Do you have any sense of what kinds of thresholds for byte slices you need to cross before you start to see problematic memory usage? |
There's no critical threshold, it's just that they're very disproportionately expensive, so a byte slice with 1M elements might take hundreds of megabytes of memory in gopls. I just mailed https://golang.org/cl/308730. It would be interesting to hear if it helps you at all. You can click the "Download" button to get clone instructions, then do |
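The disproportionate cost described above has a concrete mechanism: in a parsed Go file, every element of a composite literal is its own AST node, so a 1M-element `[]byte` literal balloons the syntax tree that gopls must hold in memory. A minimal sketch of measuring that blowup with the standard `go/parser` and `go/ast` packages (the `bigLiteral` and `astNodeCount` helpers are hypothetical names for this illustration, not gopls API):

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"strings"
)

// bigLiteral returns Go source declaring an n-element []byte literal,
// mimicking generated code that embeds binary data.
func bigLiteral(n int) string {
	var b strings.Builder
	b.WriteString("package blob\n\nvar data = []byte{")
	for i := 0; i < n; i++ {
		fmt.Fprintf(&b, "%d,", i%256)
	}
	b.WriteString("}\n")
	return b.String()
}

// astNodeCount parses src and counts the nodes in the resulting AST.
// Each byte in the literal becomes its own *ast.BasicLit node.
func astNodeCount(src string) int {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "blob.go", src, 0)
	if err != nil {
		panic(err)
	}
	nodes := 0
	ast.Inspect(f, func(n ast.Node) bool {
		if n != nil {
			nodes++
		}
		return true
	})
	return nodes
}

func main() {
	fmt.Println("nodes for 10-element literal:  ", astNodeCount(bigLiteral(10)))
	fmt.Println("nodes for 1000-element literal:", astNodeCount(bigLiteral(1000)))
}
```

The node count grows linearly with the literal's length, which is why moving such data out of source literals (for example into files loaded at runtime or via go:embed) can cut gopls's memory use.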
Great! My teammate, @nik-andreev, is planning to try this CL out and see what it does for our memory usage |
I've just tried it out and still see the high memory consumption. |
I also hit |
Small update here: I was able to shave ~8% off heap utilization by moving some of our byte slice literals to use |
Sorry I didn't mention here: the latest patch set of https://golang.org/cl/308730 should no longer panic. You can try that out if you want; I'm going to try to get it merged today. |
👍 I'll check it out |
Change https://golang.org/cl/311549 mentions this issue: |
Change https://golang.org/cl/310170 mentions this issue: |
This broke staticcheck and x/tools/refactor, most notably used for our rename support. Doesn't look like a winner. Roll it back :( Updates golang/go#45457. Change-Id: I30d5aa160fd9319329d36b2a534ee3c756090726 Reviewed-on: https://go-review.googlesource.com/c/tools/+/311549 Trust: Heschi Kreinick <heschi@google.com> Run-TryBot: Heschi Kreinick <heschi@google.com> Reviewed-by: Robert Findley <rfindley@google.com> gopls-CI: kokoro <noreply+kokoro@google.com>
We still hear from users for whom gopls uses too much memory. My efforts to reduce memory usage while maintaining functionality are proving fruitless, so perhaps it's time to accept some functionality loss. DegradeClosed MemoryMode typechecks all packages in ParseExported mode unless they have a file open. This should dramatically reduce memory usage in monorepo-style scenarios, where a ton of packages are in the workspace and the user might plausibly want to edit any of them. (Otherwise they should consider using directory filters.) The cost is that features that work across multiple packages...won't. Find references, for example, will only find uses in open packages or in the exported declarations of closed packages. The current implementation is a bit leaky; we keep the ParseFull packages in memory even once all their files are closed. This is related to a general failure on our part to drop unused packages from the snapshot, so I'm not going to try to fix it here. Updates golang/go#45457, golang/go#45363. Change-Id: I38b2aeeff81a1118024aed16a3b75e18f17893e2 Reviewed-on: https://go-review.googlesource.com/c/tools/+/310170 Trust: Heschi Kreinick <heschi@google.com> Run-TryBot: Heschi Kreinick <heschi@google.com> gopls-CI: kokoro <noreply+kokoro@google.com> Reviewed-by: Robert Findley <rfindley@google.com> TryBot-Result: Go Bot <gobot@golang.org>
I checked out this patch, but the problem with high memory consumption still persists. My computer freezes completely after trying to go to a definition inside system
go version
gopls version golang.org/x/tools/gopls master
golang.org/x/tools/gopls@(devel) I stopped the process because, as I said, my computer freezes completely, so the generated log is only 2 GB, but it grows well beyond that. I had this problem using both neovim and vim, with vim-go and coc-go. If you need more information I'll be glad to provide it. |
@drocha87 Please file a new issue. If you're working in an open source repository, full repro instructions would be helpful. |
I'm not sure about opening a new issue, since this one is still open and I'm reporting feedback on the patch proposed in this discussion. The repo I'm working on is private, but I also hit the high memory consumption when I try to go to a definition inside the aws-sdk-go repo, so I think it's pretty straightforward to repro this issue. Right now I don't have time to set up a repro, but I'll be glad to do that in my free time. |
Any update on this? I am working in a monorepo (private) and I can't even use gopls. Memory climbs within 25 minutes of usage and hits 16+ GB between go and gopls, and it never finishes loading the workspace. I get a context deadline exceeded error and then it stops. |
(please open a new issue if you are still struggling with memory usage) |
Thanks! We've updated to |
@mikeauclair thanks for following up! That's great to hear. |
Hi @folago Are you working in open source? Are you willing to help us understand your workspace? To start, could you share the output of |
Not open source, unfortunately. I now use I switched to
Anything else I can provide? |
That output shows gopls using only 617 MB after its initial workspace load, which indicates that most of the memory is spent holding data related to open packages. I'll also note that your largest workspace package has 572 files, which is quite large. Elsewhere, we have seen similar problems related to large packages. Can you do an experiment? Start by opening a low-level package that does not import this large package transitively (something like a "util" package that has few imports). Does gopls stabilize at a low memory usage? Then open a package that has a large import graph, and which imports this large package transitively (something like a "main" package). Does this cause your memory to blow up? As a next step, we can try looking at a memory profile. I am confident we can fix this problem, if you are willing to help us investigate. I will open a new issue to track this fix. |
Yes, this is what I observed. If I stay away from some packages with the accursed large dependency, the memory stays quite low with
Yes, this is what I have experienced,
You mean start add |
This is the profile I get by opening a package without the huge dep. I can open the bad one and try to stop it before it gets OOM-killed or my machine freezes. |
I cannot manage to get a profile when I open the huge dependency; once
Thanks @folago -- the profile after opening the huge dep is what we need. Here's how you can get it: start gopls with for example That should allow you to grab a snapshot of the gopls heap in action. If you can share the entire result, that would be great. But we would also learn a lot simply from the output of |
Ah nice! I made a small script to curl the heap profile to a file each second, since it was difficult to catch it before my machine got unusable. This is the second-to-last profile; the last one was empty. If the other profile can be useful to compare, let me know. Posting the script too, in case there is something wrong with it that I missed: #!/bin/bash
for i in {1..1000}; do
  curl -s http://localhost:6060/debug/pprof/heap > heap.$i.out
  sleep 1
done |
Also these are my settings for gopls in neovim: settings = {
gopls = {
analyses = {
unusedparams = true,
unusedwrite = true,
fieldalignment = true,
nilness = true,
},
staticcheck = true,
-- gofumpt = true,
templateExtensions = { 'tpl', 'tmpl' },
},
}, Just in case this is helpful, since I have few analyses turned on. |
Thanks so much for helping us investigate! I have not yet examined your profile, but I already have a theory: both the nilness analyzer and staticcheck use a form of SSA/SSI analysis that can be quite expensive. That could explain the disproportionate cost of analysis in certain packages. |
I can try to disable those two and send another profile. |
It is weird, though, that it reads so much less from disk; before it was in the GB range, I think once I saw it go to This is the gopls config in neovim: analyses = {
unusedparams = true,
unusedwrite = true,
fieldalignment = true,
-- nilness = true,
},
-- staticcheck = true,
-- gofumpt = true,
templateExtensions = { 'tpl', 'tmpl' }, And here the second last profile: |
Nice. I think we have a plausible theory. Now for a fix... For some background: analyzers that do SSA analysis are essentially broken in v0.11.0, because they do not have generated SSA for non-workspace packages. We "fixed" this in v0.12.0, but clearly that fix is not viable in some cases.
That's a useful datapoint. Most likely this is serialization of SSA structures. In particular, if they take more than 1GB, you'll be churning through the fixed-size cache at a high rate, which could lead to additional performance problems. |
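The cache-churn effect mentioned above can be illustrated in isolation: when a working set is even slightly larger than a fixed-size LRU cache and is accessed cyclically, every access evicts the entry needed next, so the hit rate collapses to zero. This toy cache is a hypothetical simplification and not gopls's actual cache implementation:

```go
package main

import (
	"container/list"
	"fmt"
)

// cache is a tiny fixed-capacity LRU, standing in for any bounded cache.
type cache struct {
	cap          int
	order        *list.List               // front = most recently used
	items        map[string]*list.Element // key -> element storing the key
	hits, misses int
}

func newCache(capacity int) *cache {
	return &cache{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

// get records a hit if key is cached; otherwise it inserts the key,
// evicting the least recently used entry when over capacity.
func (c *cache) get(key string) bool {
	if e, ok := c.items[key]; ok {
		c.order.MoveToFront(e)
		c.hits++
		return true
	}
	c.misses++
	c.items[key] = c.order.PushFront(key)
	if c.order.Len() > c.cap {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(string))
	}
	return false
}

func main() {
	// A working set of 4 entries cycling through a 3-entry cache:
	// each miss evicts exactly the entry the next access needs,
	// so every single access is a miss ("churn").
	c := newCache(3)
	keys := []string{"a", "b", "c", "d"}
	for i := 0; i < 40; i++ {
		c.get(keys[i%len(keys)])
	}
	fmt.Printf("hits=%d misses=%d\n", c.hits, c.misses)
}
```

This is why oversized entries (such as multi-GB serialized SSA) are so damaging: they shrink the effective working set the cache can hold, pushing it into exactly this regime.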
I see, thanks for the explanation and for the workaround. I guess I can live without those checks for now, looking forward to the next release. And let me know if there is anything else that I can do to speed up the diagnosis. |
@folago you might try enabling just nilness. I would be curious to know if that performs OK. If so, it indicates we're doing something wrong with staticcheck's analysis facts. I would expect nilness and staticcheck to have similar costs, as both are constructing an IR for the source being analyzed. |
Yep, nilness works! settings = {
gopls = {
analyses = {
unusedparams = true,
unusedwrite = true,
fieldalignment = true,
nilness = true,
},
-- staticcheck = true,
-- gofumpt = true,
templateExtensions = { 'tpl', 'tmpl' },
},
}, |
While |
Thanks. As noted in #61178, the profile definitely indicates that the majority of allocation is related to staticcheck. Now that we have an approximate cause, let's continue discussion on that issue. My guess is that we will be able to reproduce, by enabling staticcheck with some repositories we know to be similarly problematic for analysis. |
What version of Go are you using (go version)?
Does this issue reproduce with the latest release?
Yes
What operating system and processor architecture are you using (go env)?
Ubuntu 20.04.2 LTS
go env Output
What did you do?
Using emacs with lsp-mode + gopls, gopls used 10GiB memory
(if it's helpful, this is a monorepo with a high volume of generated gRPC code)
What did you expect to see?
Lower memory consumption
What did you see instead?
10GiB memory consumed
Attached gopls memory profile
gopls.1480857-10GiB-nonames.zip