Linux memory usage regression #1188
Thanks for testing on Windows; perhaps it's not affected there, then. I did try selecting the system and jemalloc allocators, but they both had similarly high usage for me on Linux.
I believe we build Rust with jemalloc (only libstd?), so the distribution artifacts (incl. RLS) should use jemalloc. Since the system allocator has recently become the default selection, is it possible we use both allocators (the system one for the locally built RLS code and jemalloc for the dynamically linked rustc libraries), leading to increased memory usage?
I bisected the memory regression using nightly toolchain releases, testing as above with the regex crate.
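For anyone wanting to repeat that kind of bisection, here is a rough sketch of the approach; the toolchain dates, the side-by-side rls/regex checkout layout, and the exact commands are my assumptions, not necessarily what produced the numbers in this comment.

```sh
#!/usr/bin/env bash
# Sketch: build RLS against a series of nightlies and keep each binary, so
# the "rls --cli in the regex checkout" measurement described in this issue
# can be repeated per toolchain. Dates and directory layout are placeholders.
set -e
for date in 2018-11-01 2018-11-03 2018-11-06 2018-11-08; do
    rustup toolchain install "nightly-$date"
    (cd rls && cargo +"nightly-$date" build --release \
             && cp target/release/rls "../rls-$date")
done
# Each binary can then be run from the regex checkout, e.g.:
#   cd regex && rustup run nightly-2018-11-03 ../rls-2018-11-03 --cli
# (rustup run sets up the library path for the matching toolchain).
```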
I can also deduce that this regression is caused upstream if I can reproduce it using the same rls source with different compilers, for example the current rls code (173be77) plus a small fix for older compilers. Again testing as above with the regex crate.
With 1.32 coming out, I thought I'd retest rls memory usage to provide some more recent numbers. So this regression has not been addressed and will affect the next stable release.
Since the switch to the system allocator as the default, tools don't use jemalloc anymore: rust-lang/rust#56980 (comment). I can reproduce huge memory usage that doesn't go down, even with RLS built on a very up-to-date Linux distro (latest available glibc, kernel, toolchain, etc.). At this point the underlying issue should be identified and fixed. EDIT: Is there a way to reproduce the large memory usage by manually running a command?
Yeah @mati865, you can run |
@jens1o well I was hoping for something like |
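For what it's worth, here is one way to watch RLS memory from a terminal with no editor involved. This is my own sketch built around the rls --cli mode mentioned elsewhere in this issue plus standard /proc tooling; it is not necessarily the command that was being asked for here.

```sh
# Terminal 1: run the language server in CLI mode inside the project.
cd regex && rls --cli

# Terminal 2: poll the resident set size of the rls process every 5 seconds,
# to see whether it drops back down after the initial build finishes.
watch -n 5 'grep VmRSS /proc/$(pgrep -x rls | head -n1)/status'
```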
I've tried forcing the use of jemalloc and adding the changes from rust-lang/rust#57287 to rls main (though I don't understand them). These don't seem to help with the memory usage regression though, or maybe I'm doing something wrong.
I don't have good news. After switching RLS to use jemalloc, its peak memory usage increased a lot, but later it drops to numbers similar to the system allocator. Clippy and Cargo were still using the system allocator during testing. Valgrind and Heaptrack didn't give any meaningful results. Maybe somebody more skilled could find out what's wrong; I'm out of ideas.
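A related experiment that doesn't require patching or rebuilding rls at all is to preload a distro jemalloc build, so the whole process (including the dynamically linked compiler code) allocates through it. This is not what was tried above, just an extra data point one could collect; the library path is distro-specific.

```sh
# Override malloc/free for the whole rls process by preloading jemalloc.
# Typical paths: /usr/lib/libjemalloc.so.2 on Arch,
# /usr/lib/x86_64-linux-gnu/libjemalloc.so.2 on Debian/Ubuntu.
LD_PRELOAD=/usr/lib/libjemalloc.so.2 rls --cli
```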
10 GB used for me after the initial build completed on https://salsa.debian.org/Kazan-team/kazan. Note that building kazan builds LLVM as part of the process, so it can take a really long time and a lot of memory.
VS Code + rls consume 3 GB of RAM, 1.4 GB for rls itself. This is a huge issue on systems with 8 GB of RAM. I'll have to upgrade the RAM in my laptop because of this.
I just did Massif runs on rust-lang/regex and on RLS itself. This is simple output for a run on RLS (some packages were cached, but cargo needed to build some from scratch at the beginning). While it peaks at ~440 MB, after everything is rebuilt it then stays at about 55 MB when idle. For the regex package it's slightly different (reopened folder; RLS had previously built the dependencies): idle is then 130 MB. What's interesting in this case is that there are allocations which make up around 100 MB of the total ~130 MB. So while we're not in the gigabyte order of magnitude during idle (at least I can't confirm it; I still need to test some on-and-off development and see if there is any prominent leakage), we can still improve things. @nnethercote, would you have any tips or insights on how we could debug this better, seeing as you profiled a lot of the Rust compiler itself and are the author/maintainer(?) of Massif?
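For reference, a Massif run like the one described above can be started roughly as follows; the output file name is my choice, and the measurements above were not necessarily taken with exactly these flags. Expect everything to run far slower than usual under Valgrind.

```sh
# Record a heap profile of rls --cli under Massif, then view it as text.
valgrind --tool=massif --massif-out-file=massif.out.rls rls --cli
ms_print massif.out.rls | less
```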
First, use massif-visualizer. It's a graphical viewer for Massif data that is much better than ms_print.

Second, if you build your own version of Valgrind from trunk (or wait until 3.15 comes out, which I believe should be soon), Massif will get better, because it can now use debug info to get better stack traces. The Massif that comes with Valgrind 3.14 and earlier doesn't know about inlining, so any inlined frames aren't shown, and in Rust code there is usually a lot of inlining. See here for how to build your own Valgrind.

Third, if you build your own (or wait until 3.15) you can use the new and improved version of DHAT, which tracks peak memory usage as well as a bunch of other things relating to heap memory. I find its presentation of the stacks at peak memory a bit nicer, but it doesn't have the time-based graph, and in your case the graph is useful because there are multiple peaks.
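Turning that advice into commands, roughly (the clone URL and the DHAT output/viewer names are what I believe current Valgrind uses; treat them as assumptions):

```sh
# Graphical view of an existing Massif profile.
massif-visualizer massif.out.rls

# Build Valgrind from its development tree to get inline-aware Massif stacks
# and the newer DHAT tool (otherwise shipped with 3.15+).
git clone https://sourceware.org/git/valgrind.git
cd valgrind && ./autogen.sh && ./configure && make && sudo make install

# DHAT records the heap state at peak memory; it writes dhat.out.<pid>,
# which is loaded into the dh_view.html viewer bundled with Valgrind.
valgrind --tool=dhat rls --cli
```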
Testing with regex: I tested with regex and rls checked out next to each other, run in the regex dir.
What can be done about this? The bug has existed since mid-December and is P-high. By not fixing this, it seems like Linux is being treated as if it doesn't exist. I just recorded RLS at 2.2 GB (on a machine with 8 GB) in VS Code with no files open. I had worked on Rust files previously, but it had been at least half an hour since I had any open. I am absolutely willing to help, but given that no one is even assigned to this, I'm losing faith. If someone can point me in the right direction, that would be great. What changed between 2018-11-03 and 2018-11-06, as indicated in this issue? I presume that's a starting place.
@jhpratt Linux has the best overall support; this issue is just harder to debug. Rust changed the way it links jemalloc; there is more info in the Rust issue: rust-lang/rust#56980
Without upstream progress I think we'll need #1307, which should address this, as the compiler's memory usage will die with its process.
Will this issue ever be solved? RLS makes my PC totally freeze within minutes of opening VS Code.
@alexcrichton @alexheretic would you please release the solution from #1307, at least in the Linux releases? Development with RLS on Linux is really a painful experience due to this memory leak bug, and it becomes exponentially worse as the project grows.
@gophobic @gh67uyyghj In the meantime you can give a try to rust-analyzer. It's frequently updated and works much better than RLS (on my setup at least :p) |
@fstephany thank you, I just installed it.
FWIW, this is now bad enough that I've disabled RLS in VS Code due to the extremely frequent restarts necessary. Memory would rise to over 1.5 GB within a matter of a minute or two. This is just developing the time crate, not anything terribly large.
In case tougher reproductions are needed: in projects using godot-rust I'm hitting 6 GB (often sending my machine into swap-freezes).
Could you try using nightly RLS and see if that’s still the issue?
I'm using stable, and the memory usage is still terrible. Maybe a language server is just a bad idea for every language; non-long-running shared-library plugins would work better.
The fix hasn't been backported, so stable and beta are not fixed and still have the memory issue.
rls 1.41.0 still allocates a huge amount of memory (about 4.5 GB, on a machine with 8 GB, Ubuntu 18.04).
I noticed RLS can consume quite a lot of memory lately. Upstream: rust-lang/rust#56980

Idle memory usage
Memory usage running rls --cli. Readings taken after running once until built, quitting, then re-running and waiting for the build to complete. So current stable gives a decent idea of where it should be. I tested the beta channel out of interest and had results similar to current nightly.
Status
Tests use regex d4b9419 on Arch Linux.
2019-03-10
Update 2019-07-04
Update 2019-08-16