# Benchmark LSP Event Execution Times for Rust Analyzer vs sway-lsp #4901
OK, in order to compare the two language servers, I first used ChatGPT to create a large ~1700-line Rust project with lots of types and variable assignments. I then created a new Sway project and tweaked the file slightly so it would compile. The Rust and Sway projects can be found here. I then triggered each of the LSP features below 10 times in both projects.
I was expecting to see a difference between the two, but I wasn't expecting our requests to take such a long time. I plan to look into how we can optimise these next.
OK, I set up a benchmarking framework using `criterion` so we can get more accurate results.
The requests all take roughly the same amount of time when unit testing each function in isolation, without taking the client <=> server interaction into consideration. I decided to also perform benchmarks on the functions defined on our internal `TokenMap`.
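For reference, a `criterion` benchmark for a `TokenMap`-style lookup might look something like the sketch below. The `Position`, `TokenMap`, and `token_at_position` definitions here are simplified stand-ins, not the actual sway-lsp types; the point is that the map is built once outside the measured loop, so only the query itself is timed.

```rust
use std::collections::HashMap;

use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Simplified stand-ins for the real sway-lsp types; illustrative only.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Position {
    line: u32,
    character: u32,
}

struct TokenMap(HashMap<Position, String>);

impl TokenMap {
    fn token_at_position(&self, position: Position) -> Option<&String> {
        self.0.get(&position)
    }
}

fn bench_token_at_position(c: &mut Criterion) {
    // Build the map once, outside the measured loop, so the benchmark
    // times only the lookup itself.
    let map = TokenMap(
        (0..10_000)
            .map(|line| (Position { line, character: 0 }, format!("ident_{line}")))
            .collect(),
    );
    let position = Position { line: 5_000, character: 0 };

    c.bench_function("token_at_position", |b| {
        b.iter(|| map.token_at_position(black_box(position)))
    });
}

criterion_group!(benches, bench_token_at_position);
criterion_main!(benches);
```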
It seems that if we can optimise these methods, then all those wins will bubble up to the LSP requests as well. @xunilrj made a good suggestion about also serializing the responses to JSON so we can assess the "size" of each response.
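A rough sketch of that idea, assuming `serde`/`serde_json` and hand-rolled stand-ins for the `lsp_types` response shapes (the real code would serialize the server's actual response values instead): serialize the response to JSON and record its byte length.

```rust
use serde::Serialize;

// Hand-rolled stand-ins for the `lsp_types` response shapes; illustrative only.
#[derive(Serialize)]
struct Position {
    line: u32,
    character: u32,
}

#[derive(Serialize)]
struct Range {
    start: Position,
    end: Position,
}

#[derive(Serialize)]
struct Location {
    uri: String,
    range: Range,
}

/// Serialize any response to JSON and report its size in bytes.
fn response_size_in_bytes<T: Serialize>(response: &T) -> usize {
    serde_json::to_vec(response).map(|bytes| bytes.len()).unwrap_or(0)
}

fn main() {
    // e.g. the payload of a textDocument/definition response.
    let response = Location {
        uri: "file:///tmp/project/src/main.sw".to_string(),
        range: Range {
            start: Position { line: 10, character: 4 },
            end: Position { line: 10, character: 12 },
        },
    };
    println!("definition response: {} bytes", response_size_in_bytes(&response));
}
```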
## Description

This PR adds a way to benchmark the LSP requests and `TokenMap` methods using `criterion`. I've also added a large Sway benchmark project that can be used when running benchmarks, now and in the future, to measure performance improvements. Here are the results of the benchmarks.

| Request Type      | Duration |
|-------------------|----------|
| hover             | 442ms    |
| documentHighlight | 441ms    |
| definition        | 433ms    |
| prepareRename     | 433ms    |
| rename            | 436ms    |

| TokenMap method         | Duration |
|-------------------------|----------|
| tokens_for_file         | 230ms    |
| idents_at_position      | 438ms    |
| tokens_at_position      | 437ms    |
| token_at_position       | 439ms    |
| parent_decl_at_position | 437ms    |

Related to #4901
## Description

This PR speeds up all features in the language server, at the cost of a small performance hit during AST traversal.

The first optimisation is to the `Position` type in `sway-types/src/span.rs`, which has been changed to take a reference instead of using clones. The `line_col()` method was getting called a lot in order to convert between the span's start and end and the `lsp_types::Range` type that is used in LSP. As such, we were calculating this for every token in the language server when iterating over the `TokenMap` for all of the features. Removing the clones led to a 40% speed up.

The second optimisation introduces a new `TokenIdent` type. This is similar to `sway_types::Ident` but holds data that is more useful for LSP, specifically a `PathBuf` and `Range`. We are now precomputing these _during_ AST traversal and using the new `TokenIdent` as the key into the `TokenMap`. This avoids needing to pass the source engine around to convert `SourceId`s to paths, and also avoids recomputing the range more than once. I think the small performance hit we take moving this computation to the traversal stage is worth the insane speed increases we get for all other LSP features. I have a plan in #4959 to parallelize traversing the ASTs using the `rayon` crate in a follow-up PR that overcomes this performance regression.

Here are the results of running `cargo bench` with these optimisations applied.

| Type                    | Previous Duration | Current Duration | Speedup     | Improvement % |
|-------------------------|-------------------|------------------|-------------|---------------|
| on_enter                | 572ms             | 567ns            | 1008818.34x | 100.0%        |
| code_lens               | 34ms              | 16.059µs         | 2117.19x    | 99.95%        |
| idents_at_position      | 438ms             | 3.281ms          | 133.50x     | 99.25%        |
| prepareRename           | 433ms             | 6.635ms          | 65.26x      | 98.47%        |
| token_at_position       | 439ms             | 6.76ms           | 64.94x      | 98.46%        |
| definition              | 433ms             | 6.726ms          | 64.38x      | 98.45%        |
| hover                   | 442ms             | 7.389ms          | 59.82x      | 98.33%        |
| document_symbol         | 447ms             | 7.49ms           | 59.68x      | 98.32%        |
| semantic_tokens         | 464ms             | 24.48ms          | 18.95x      | 94.72%        |
| completion              | 893ms             | 51.03ms          | 17.50x      | 94.29%        |
| parent_decl_at_position | 437ms             | 42.49ms          | 10.28x      | 90.28%        |
| tokens_at_position      | 437ms             | 42.87ms          | 10.19x      | 90.19%        |
| code_action             | 444ms             | 56.97ms          | 7.79x       | 87.17%        |
| documentHighlight       | 441ms             | 115ms            | 3.83x       | 73.92%        |
| rename                  | 436ms             | 115.35ms         | 3.78x       | 73.54%        |
| inlay_hints             | 15ms              | 9.75ms           | 1.54x       | 35.0%         |
| compile_and_traverse    | 1.795sec          | 1.9715sec        | 0.91x       | -9.83%        |
| tokens_for_file         | 10ms              | 11.37ms          | 0.88x       | -13.7%        |

Related to: #4901
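For illustration, a `TokenIdent` along the lines described above might look like the sketch below. The field names and the local `Range` stand-in are assumptions, not the actual sway-lsp definition.

```rust
use std::path::PathBuf;

// Assumed stand-in for `lsp_types::Range`, as (line, character) pairs.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct Range {
    pub start: (u32, u32),
    pub end: (u32, u32),
}

// Hypothetical shape of the `TokenIdent` key described above.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct TokenIdent {
    /// The identifier's text.
    pub name: String,
    /// Precomputed during AST traversal, so LSP features no longer need
    /// the source engine to resolve a `SourceId` back into a path.
    pub path: PathBuf,
    /// Computed once via `line_col()` at traversal time, rather than for
    /// every token on every request.
    pub range: Range,
}
```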
Closing this as we have benchmarking in the server now and the rust-analyzer speeds have been captured in this issue.
It would be beneficial to create identical projects in Sway and Rust, and then benchmark the time taken to execute each Language Server Protocol (LSP) event in rust-analyzer and sway-lsp.
This would give us a quantitative basis for understanding the performance differences between the two implementations. Identifying any significant discrepancies could lead to targeted optimization efforts.