
Add benchmark framework to sway-lsp #4927

Merged: 14 commits from josh/benchmarking into master on Aug 11, 2023
Conversation

JoshuaBatty (Member) commented on Aug 9, 2023

Description

This PR adds a way to benchmark the LSP requests and TokenMap methods using criterion. I've also added a large sway benchmark project that can be used when running benchmarks now and in the future to measure performance improvements.
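The actual benchmark code lives under `sway-lsp/benches`. As a rough sketch of the criterion pattern involved (the setup and the measured work below are illustrative stand-ins, not this PR's code):

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_requests(c: &mut Criterion) {
    // Expensive one-time setup happens outside the measured loop. In the real
    // benchmarks this is where the large Sway benchmark project is compiled.
    let tokens: Vec<u64> = (0..100_000).collect();

    c.bench_function("hover", |b| {
        // Only the closure passed to `iter` is timed; `black_box` keeps the
        // optimizer from eliding the work being measured.
        b.iter(|| {
            let sum: u64 = tokens.iter().map(|t| black_box(*t) % 97).sum();
            black_box(sum)
        })
    });
}

criterion_group!(benches, bench_requests);
criterion_main!(benches);
```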

Here are the results of the benchmarks:

| Request type | Duration |
| --- | --- |
| hover | 442ms |
| documentHighlight | 441ms |
| definition | 433ms |
| prepareRename | 433ms |
| rename | 436ms |
| semantic_tokens | 464ms |
| document_symbol | 447ms |
| completion | 893ms |
| inlay_hints | 15ms |
| code_action | 444ms |
| code_lens | 34ms |
| on_enter | 572ms |

| TokenMap method | Duration |
| --- | --- |
| tokens_for_file | ~~230ns~~ 10ms |
| idents_at_position | 438ms |
| tokens_at_position | 437ms |
| token_at_position | 439ms |
| parent_decl_at_position | 437ms |

| Compilation | Duration |
| --- | --- |
| compile_and_traverse | 1.795s |

Related to #4901.

Checklist

  • I have linked to any relevant issues.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation where relevant (API docs, the reference, and the Sway book).
  • I have added tests that prove my fix is effective or that my feature works.
  • I have added (or requested a maintainer to add) the necessary Breaking* or New Feature labels where relevant.
  • I have done my best to ensure that my PR adheres to the Fuel Labs Code Review Standards.
  • I have requested a review from the relevant team or maintainers.

JoshuaBatty self-assigned this on Aug 9, 2023
JoshuaBatty added the language server label on Aug 9, 2023
JoshuaBatty requested review from a team on August 9, 2023 03:44
JoshuaBatty enabled auto-merge (squash) on August 9, 2023 05:12
anton-trunov (Contributor)

LGTM! Perhaps we should add some developer documentation noting that we have benchmarks and explaining how to run them?

xunilrj (Contributor) commented on Aug 9, 2023

Some methods like tokens_for_file are lazy; maybe that is why their result looks so good. You should collect() the results into a Vec.

I'm also wondering whether it's worth serializing the responses to JSON, since that way we'd also measure the "size" of each response.
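For anyone following along, the pitfall here is that Rust iterator adapters do no work until they are driven; a standalone illustration (not the PR's benchmark code):

```rust
fn main() {
    let nums: Vec<u64> = (0..1_000_000).collect();

    // Lazy: this only builds an iterator adapter. Nothing is filtered yet, so
    // timing this line measures almost nothing (compare the 230ns above).
    let _lazy = nums.iter().filter(|n| **n % 7 == 0);

    // Eager: collect() actually drives the iterator, so this is the cost a
    // benchmark of tokens_for_file should be measuring.
    let multiples: Vec<&u64> = nums.iter().filter(|n| **n % 7 == 0).collect();
    println!("{} multiples of 7", multiples.len());
}
```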

sdankel (Member) previously approved these changes on Aug 9, 2023:

Awesome work!

If the benchmarks don't take too long to run, it would be cool to integrate the request benchmarks into CI with a script like this. We could detect any change in the compiler or LSP code that significantly slows things down.
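One possible building block for such a check is criterion's built-in baseline comparison; a hypothetical CI flow might look like the following (the branch name is a placeholder, and none of this is set up by this PR):

```sh
# Record a baseline on master, then re-run on the PR branch and compare;
# criterion reports regressions relative to the saved baseline.
git checkout master
cargo bench -- --save-baseline master
git checkout <pr-branch>
cargo bench -- --baseline master
```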

(Resolved review thread on sway-lsp/benches/lsp_benchmarks/requests.rs)
JoshuaBatty (Member, Author)

> Some methods like tokens_for_file are lazy; maybe that is why their result looks so good. You should collect() the results into a Vec.

Good point, will modify the benchmark to collect into a Vec for this one.
I'll have a think about measuring the size and address that in a later PR. Thanks for the review!

JoshuaBatty (Member, Author)

> If the benchmarks don't take too long to run, it would be cool to integrate the request benchmarks into CI with a script like this. We could detect any change in the compiler or LSP code that significantly slows things down.

Nice, yeah, something like that could be cool to integrate into our CI down the track. I'll have a think about what would be useful for us.

JoshuaBatty requested review from sdankel, xunilrj, and a team on August 10, 2023 01:45
JoshuaBatty (Member, Author)

> LGTM! Perhaps we should add some developer documentation noting that we have benchmarks and explaining how to run them?

I've added a page under the Engineering/LSP page in the internal Notion docs for now. We could perhaps add another .md file to this repo at some point if the need arises.

If you want to run them yourself, just `cd` into the `sway-lsp` directory and run `cargo bench`.
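For example:

```sh
cd sway-lsp
cargo bench          # run all benchmarks
cargo bench hover    # cargo forwards a name filter, so criterion runs only matching benchmarks
```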

sdankel (Member) previously approved these changes on Aug 10, 2023:

:shipit:

JoshuaBatty requested a review from a team on August 10, 2023 02:38
JoshuaBatty requested a review from sdankel on August 11, 2023 00:49
JoshuaBatty requested a review from a team on August 11, 2023 02:36
JoshuaBatty merged commit f96d7e7 into master on Aug 11, 2023
JoshuaBatty deleted the josh/benchmarking branch on August 11, 2023 07:24