[Linux] Virtual memory usage continuously grows #1284
Comments
It would be interesting to see if this is reproducible on other platforms, like Windows and macOS. We run Ubuntu 16.04 here.
Thanks @akosyakov - we will try that.
@marcdumais-work, do you need to do anything special (open a file, terminal, etc.) for it to happen, or do you just start the backend, open a client, and that's it?
When just opening a workspace and idling, I do see a process whose VSZ slowly but steadily grows.
I'll let it run for a while. I don't know which of the processes it is; I opened #1372 related to this. I'll also check if the leak happens in no-cluster mode as well.
@simark That's correct - just opening a workspace is enough. Is the process name ipc-bootstrap.js? If so, I think it's a watcher process.
Ok, so it's quite clear:
This is with
I have three processes named like this; it's not really a good indication of what the process is for:
You should be able to see a server name in the logs for a given process id.
An update on this issue: we tried running Theia through valgrind with the
I was able to reproduce a similar issue using an example NSFW program. See the linked issue above.
From issue #1466: I pulled and ran the theiaide/theia image from Docker Hub, and left it overnight with a client connected. Using docker stats to monitor the memory usage: when it started it was using 120MB, but after 18 hours of being up but doing nothing, memory was at 215MB. It's leaking a lot of virtual memory, and a lesser amount of resident memory.
Reloading the client resets both values.
A new post on the issue we opened upstream: in Atom, they are/were also using nsfw.
in order to avoid memory leaks because of circular symlinks
Signed-off-by: akosyakov <anton.kosyakov@typefox.io>
I re-tested for this and unfortunately the leak is still present, apparently the same as before. However, that made me think to test something: as described here, the leak is reproducible with a small hello-world nsfw program and is not specific to Theia. So I wondered if I would be able to reproduce it with vscode. I was not. That makes me think that they probably use nsfw slightly differently, avoiding the issue.
@marechal-p As discussed, please have a look at how vscode uses nsfw vs. our own usage. I can help you set up to reproduce the issue.
VS Code simply doesn't use nsfw by default:

```jsonc
{
  // Use the new experimental file watcher.
  "files.useExperimentalFileWatcher": false
}
```

By default Chokidar is used on Unix and a weird file watcher on Windows?
Update: Ok, so when you enable this option to use nsfw, the leak comes back.

Moral: If you don't like leaks, don't use nsfw.

edit: After some more tests, it seems like there is no leak whatsoever when using the "legacy" watchers (tested on Unix, so it must have been Chokidar).
Signed-off-by: akosyakov <anton.kosyakov@typefox.io>
Fixed via #4128
We noticed that if we leave Theia running for some time with a client connected (doing nothing special), the virtual memory used seems to steadily grow, while the reported resident (actually used) memory is stable, at a couple hundred megs or so. This would seem to indicate that some memory is allocated but never used?
Also, if the client is restarted, the virtual memory clears after a minute or two.
Here is output of the `ps` command, taken ~14h apart. The used virtual memory grew from ~25 GB to ~120 GB.

Running the backend in "inspect" mode, I have connected with the Chrome dev tools and captured heap snapshots, near the beginning of the run and after ~15h, but they show little, maybe 10-15 megs extra used, while the virtual memory grew from ~1 GB to 120 GB+.
Heap-20180215T072813.heapsnapshot.gz
Heap-20180215T072828.heapsnapshot.gz
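The small delta in the heap snapshots is consistent with how V8 tooling works: snapshots only cover the JavaScript heap, so memory held by native addons (such as a watcher's C++ side) or merely reserved address space never appears in them. A small sketch of my own contrasting the figures Node reports via `process.memoryUsage()`:

```javascript
// V8 heap snapshots only see the JS heap (heapUsed/heapTotal). Native
// allocations show up in rss/external, and reserved-but-unused address
// space only in the OS-level VSZ, which Node does not report at all.
const mu = process.memoryUsage();
const mb = (n) => (n / 1024 / 1024).toFixed(1) + ' MB';

console.log('JS heap used :', mb(mu.heapUsed));
console.log('JS heap total:', mb(mu.heapTotal));
console.log('external     :', mb(mu.external));
console.log('rss          :', mb(mu.rss));
```

Typically `rss` is well above `heapUsed`: the resident set also contains native memory that a heap snapshot will never show, which is why a leak outside V8 leaves snapshots looking clean.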
We have tried to run the backend with Valgrind, to hopefully trace where the leak comes from, but when we do that, the virtual memory seems to stay stable at ~1.2 GB.