
Benchmark improvements #1178

Merged
merged 20 commits into from
Jan 10, 2021

Conversation

pepeiborra
Collaborator

A couple of changes:

  • Fix an issue with allocation stats by moving all the logic to the shake-bench package
  • Support examples with multiple files of interest
  • Extend the two existing examples with an additional file of interest
  • Add two new experiments:
    • "getCompletions": measures the performance of completions alone
    • "getDefinition after edit": measures the performance of getDefinition when results are stale
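A minimal sketch of how the two new experiments could be described. The `Experiment` type, its field names, and the `afterEdit` flags are illustrative assumptions for this sketch, not the actual shake-bench API:

```haskell
-- Illustrative sketch only: Experiment and its fields are assumptions,
-- not the real shake-bench types.
data Experiment = Experiment
  { expName    :: String  -- label that shows up in the results table
  , afterEdit  :: Bool    -- whether the request runs against stale (just-edited) state
  , lspRequest :: String  -- LSP request being exercised
  } deriving Show

newExperiments :: [Experiment]
newExperiments =
  [ Experiment "getCompletions"           False "textDocument/completion"
  , Experiment "getDefinition after edit" True  "textDocument/definition"
  ]

main :: IO ()
main = mapM_ print newExperiments
```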

@pepeiborra pepeiborra requested a review from wz1000 January 9, 2021 20:01
Collaborator

@wz1000 wz1000 left a comment


Looks good. I wonder if we can have some sort of bot to automatically copy benchmark results to the PR.

Also, now that HLS is the user-facing tool, I think we should look into benchmarking that too, and measuring the impact of all the plugins on performance.

@wz1000
Collaborator

wz1000 commented Jan 9, 2021

Kind of alarming to see that "completions after edit" and "documentSymbols after edit" allocate 10x more than everything else.

There is something funky going on with the measurements for document symbols on GHC 8.8:

lsp-types-1.0.0.1   upstream   documentSymbols after edit   True      100       15.375339168000002   0.0                   3.8893017120000004    0.6740060270000003      4.57835993            7MB            4021MB
lsp-types-1.0.0.1   HEAD       documentSymbols after edit   True      100       387.45544769         0.0                   3.9830960580000014    0.7853261429999997      4.7930202060000004    134MB          7044MB

@wz1000
Collaborator

wz1000 commented Jan 9, 2021

Hmm, similar weirdness with allocations going on for document symbols across all GHC versions. Any idea what this is?

@pepeiborra
Collaborator Author

> Hmm, similar weirdness with allocations going on for document symbols across all GHC versions. Any idea what this is?

It's the first experiment to run, and it finds a cold cache.

@pepeiborra
Collaborator Author

pepeiborra commented Jan 9, 2021

> Looks good. I wonder if we can have some sort of bot to automatically copy benchmark results to the PR.
>
> Also, now that HLS is the user-facing tool, I think we should look into benchmarking that too, and measuring the impact of all the plugins on performance.

I have it in my backlog to benchmark HLS instead of (or in addition to) ghcide. It shouldn't be too hard.

@wz1000
Collaborator

wz1000 commented Jan 9, 2021

> It's the first experiment to run, and it finds a cold cache.

Ah, can we warm the cache up before running any experiments?

@pepeiborra pepeiborra added the merge me label Jan 9, 2021
@wz1000
Collaborator

wz1000 commented Jan 9, 2021

Or better yet, run all the experiments with warm and cold cache?

@pepeiborra
Collaborator Author

> It's the first experiment to run, and it finds a cold cache.

> Ah, can we warm the cache up before running any experiments?

> Or better yet, run all the experiments with warm and cold cache?

That would be too expensive though.

Why do we care about the cold cache scenario?

@pepeiborra
Collaborator Author

It would definitely make sense to warm the cache before running the benchmarks.
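The idea can be sketched as follows: run one unmeasured pass first and throw its result away, so the cold-cache cost never lands on the first experiment's samples. Everything here (`samplesWithWarmup`, the simulated timings) is made up for illustration and is not shake-bench code:

```haskell
-- Illustrative sketch of warming the cache before measuring: the first
-- pass is run and discarded, so the cold-cache cost does not inflate
-- the first experiment's numbers. All names here are made up.
import Data.IORef

type Sample = Double

-- Run an action n times, discarding one extra warm-up run first.
samplesWithWarmup :: Int -> IO Sample -> IO [Sample]
samplesWithWarmup n act = do
  _ <- act                      -- warm-up pass, result thrown away
  mapM (const act) [1 .. n]

main :: IO ()
main = do
  -- Simulate a cold cache: the very first run costs 10x more.
  cold <- newIORef True
  let run = do
        c <- readIORef cold
        writeIORef cold False
        pure (if c then 100.0 else 10.0)
  xs <- samplesWithWarmup 3 run
  print xs  -- prints [10.0,10.0,10.0]: the 100.0 cold run never appears
```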

@pepeiborra pepeiborra merged commit 3773010 into master Jan 10, 2021