High memory usage situation, can we use an lru cache to deal with Document parse? #720
Comments
Agreed... that may really be an issue if you have a really big workspace. Just as a reference, how much memory are the language server processes using in your case? How many .resource/.robot files do you have? Is this some code that can be shared? We should probably strive to do an lru based on the size of the documents to compute when items should be evicted instead of a plain count of items. Also, we can probably do some pseudo-interning in the names used so that the ASTs can share the names used instead of creating new strings to consume less space.
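The two ideas above (size-based eviction instead of an item count, plus pseudo-interning of shared names) could be sketched roughly like this; class and parameter names here are hypothetical, not code from the project:

```python
import sys
from collections import OrderedDict

class SizeLimitedCache:
    """LRU cache that evicts based on the total size of cached documents
    rather than a plain count of items. Names and limits are illustrative."""

    def __init__(self, max_bytes=256 * 1024 * 1024):
        self._max_bytes = max_bytes
        self._current = 0
        self._items = OrderedDict()  # uri -> (size, document)

    def get(self, uri):
        if uri in self._items:
            self._items.move_to_end(uri)  # mark as most recently used
            return self._items[uri][1]
        return None

    def put(self, uri, document, size):
        if uri in self._items:
            self._current -= self._items.pop(uri)[0]
        self._items[uri] = (size, document)
        self._current += size
        # Evict least recently used entries until under budget,
        # always keeping at least the newest document.
        while self._current > self._max_bytes and len(self._items) > 1:
            _, (evicted_size, _) = self._items.popitem(last=False)
            self._current -= evicted_size

def intern_name(name):
    """Pseudo-interning: identical name strings used across ASTs
    share one string object instead of many copies."""
    return sys.intern(name)
```

With a byte budget, one very large test-data file counts proportionally against the cache, while interning makes repeated keyword/variable names nearly free.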
I am sorry that I cannot share my project, but I can share some figures with you. I have over 1000 .resource/.robot files, their total size is approximately 800 MB, and the memory usage of the LSP is about 6 GB.
If you do have some data/example to test the performance, that'd be nice... |
As a note, I've created a pre-release ( Can you check if it improves the situation for your use-case? |
Yes! Your work is excellent! |
Is your feature request related to a problem? Please describe.
When we deal with a big project, for example one where large amounts of data are written inline in the test cases, the LSP stores all of the source code, which can cause huge memory usage, almost 7 times the size of the test cases themselves.
Describe the solution you'd like
I studied the source code and found that the main reason this happens is that we cache every Document within the workspace. However, we could use some cache strategy to shrink the memory usage; an LRU cache may be a good choice. I have not dug deep enough to analyse exactly what the Documents are used for, so please let me know if this would cause problems.
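The idea of evicting parsed Documents with an LRU policy could look roughly like the sketch below. This is not the project's actual parser or API; the source store and parse function are stand-ins to show the caching shape:

```python
from functools import lru_cache

# Hypothetical in-memory store standing in for workspace files.
_SOURCES = {"suite.robot": "*** Test Cases ***\nExample\n    Log    hello"}

def _parse(source: str):
    # Stand-in for a real AST builder.
    return {"lines": source.splitlines()}

@lru_cache(maxsize=128)
def parse_document(uri: str):
    # Only the 128 most recently used parses are kept; older ASTs are
    # dropped and re-parsed on demand, so memory stays bounded while
    # frequently touched documents still avoid re-parsing.
    return _parse(_SOURCES[uri])
```

A real integration would also need to invalidate an entry when its document changes (e.g. by including a version number in the cache key), otherwise stale ASTs would be served.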
Describe alternatives you've considered
If some users want to cache everything to speed things up, maybe we could expose that as a VS Code configuration option, which would obviously be a nice choice.
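Gating the cache policy on such a setting could be sketched as below; the setting name "robot.cacheAllDocuments" is purely hypothetical and not an actual robotframework-lsp or VS Code option:

```python
from collections import OrderedDict

class BoundedCache:
    """Minimal LRU-style cache with a fixed item count."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key in self._data:
            self._data.move_to_end(key)  # refresh recency on access
            return self._data[key]
        return default

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # drop least recently used

def make_document_cache(settings):
    # "robot.cacheAllDocuments" is a hypothetical setting name.
    if settings.get("robot.cacheAllDocuments", False):
        return {}  # unbounded: trade memory for maximum speed
    return BoundedCache(maxsize=128)
```

This keeps the bounded LRU as the safe default while letting users with plenty of RAM opt back into caching everything.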