
High memory usage situation, can we use an lru cache to deal with Document parse? #720

Closed
Zoupers opened this issue Aug 9, 2022 · 5 comments
Labels
enhancement New feature or request robotframework-ls

Comments

@Zoupers
Contributor

Zoupers commented Aug 9, 2022

Is your feature request related to a problem? Please describe.
When dealing with a big project, for example one where large amounts of data are written inline in the test cases, the LSP stores all the source code, which can cause huge memory usage, almost 7 times the size of the test cases themselves.

Describe the solution you'd like
I studied the source code and found that the main reason this happens is that we cache every Document in the workspace. However, we could use a caching strategy to shrink the memory usage; LRU may be a good choice. I have not dug deep enough to analyse exactly what the Documents are used for, so please let me know if this would cause problems.

Describe alternatives you've considered
If some users want to cache everything to speed things up, maybe we can expose that as a VS Code configuration option, which would obviously be a nice choice.

@Zoupers Zoupers added enhancement New feature or request robotframework-ls labels Aug 9, 2022
@fabioz
Collaborator

fabioz commented Aug 9, 2022

Agreed... that may really be an issue if you have a really big workspace.

Just as a reference, how much memory are the language server processes using in your case? How many .resource/.robot files do you have? Is this some code that can be shared?

We should probably strive to implement an LRU that uses the size of the documents to decide when items should be evicted, instead of a plain count of items.
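A minimal sketch of such a size-aware LRU, assuming a simple byte budget. The names (`SizeLimitedLRU`, `max_bytes`) are illustrative and not actual robotframework-ls code:

```python
from collections import OrderedDict


class SizeLimitedLRU:
    """Evicts least-recently-used entries once the total cached size
    exceeds max_bytes, rather than capping the number of items."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self._total = 0
        self._items = OrderedDict()  # key -> (value, size)

    def get(self, key):
        value, size = self._items.pop(key)  # raises KeyError if absent
        self._items[key] = (value, size)    # re-insert as most recent
        return value

    def put(self, key, value, size):
        if key in self._items:
            _, old_size = self._items.pop(key)
            self._total -= old_size
        self._items[key] = (value, size)
        self._total += size
        # Evict oldest entries until we are back under the budget
        # (always keep at least the entry just added).
        while self._total > self.max_bytes and len(self._items) > 1:
            _, (_, evicted_size) = self._items.popitem(last=False)
            self._total -= evicted_size


cache = SizeLimitedLRU(max_bytes=100)
cache.put("a.robot", "source a", size=60)
cache.put("b.robot", "source b", size=50)  # pushes total to 110, evicts a.robot
```

The size could be as simple as `len(document.source)`; the point is that one huge document counts against the budget as much as many small ones.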

Also, we can probably do some pseudo-interning of the names used so that the ASTs can share string instances instead of creating new strings, to consume less space.
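The pseudo-interning idea could look roughly like the sketch below: keep one canonical instance per distinct name and hand that out everywhere, so equal names parsed from different files share one string object. (Python's built-in `sys.intern` does something similar for `str`; a manual table like this is a hypothetical illustration, not actual robotframework-ls code.)

```python
_interned = {}


def intern_name(name):
    """Return a canonical shared instance of `name`, so equal names
    built from different parses point at one string object."""
    cached = _interned.get(name)
    if cached is None:
        _interned[name] = name
        return name
    return cached


# Two equal strings built at runtime are distinct objects...
a = " ".join(["My", "Keyword"])
b = " ".join(["My", "Keyword"])
assert a == b and a is not b
# ...but after interning, both references share one object.
assert intern_name(a) is intern_name(b)
```

Applied while building the AST, this means a keyword name that appears in hundreds of files is stored in memory once instead of hundreds of times.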

@Zoupers
Contributor Author

Zoupers commented Aug 9, 2022

I am sorry that I cannot share my project, but I can share some numbers with you. First, I have over 1000 .resource/.robot files, their total size is approximately 800 MB, and the memory usage of the LSP is about 6 GB.
If you want to generate some data to test the performance, I am happy to offer an example.
Basing eviction on the size of the documents is a brilliant idea, and we could set the limit according to the available memory.

@fabioz
Collaborator

fabioz commented Aug 9, 2022

If you do have some data/example to test the performance, that'd be nice...

@fabioz
Collaborator

fabioz commented Aug 10, 2022

As a note, I've created a pre-release (1.0.1) with an LRU cache for the docs.

Can you check if it improves the situation for your use-case?

@Zoupers
Contributor Author

Zoupers commented Aug 11, 2022

Yes! Your work is excellent!

@Zoupers Zoupers closed this as completed Oct 17, 2023