performance measurement #221
Description
If we don't test it, we won't maintain it.
We should have an easy (like 'run this script') way to measure and record the performance of key operations. We probably shouldn't worry about infrastructure for this yet; to start with, we just need a way to do it and to keep an eye on the results.
We should measure end-to-end time (i.e., from receiving a request to issuing a response) as well as some finer-grained measurements (there is already some of the latter in rls-analysis, but it is very ad hoc). We should include compiler and Cargo time where appropriate, but exclude VSCode (or other IDE) time and protocol time.
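For the end-to-end numbers, it could be as simple as wrapping request handling in a timer and appending a row to a results file we can eyeball over time. A minimal sketch; the `record` helper, the CSV layout, and the measure/project names here are assumptions for illustration, not existing RLS APIs:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::time::Instant;

// Hypothetical hook: time a request from receipt to response and append
// the result to a CSV. `measure` would be e.g. "completion" or
// "goto_def"; `project` e.g. "hello_world" or "servo".
fn record<T>(project: &str, measure: &str, f: impl FnOnce() -> T) -> T {
    let start = Instant::now();
    let result = f();
    let elapsed = start.elapsed();

    let mut log = OpenOptions::new()
        .create(true)
        .append(true)
        .open("perf-results.csv")
        .expect("could not open results file");
    writeln!(log, "{},{},{}", project, measure, elapsed.as_millis())
        .expect("could not write result");

    result
}

fn main() {
    // Example: time a fake "request" (the sleep stands in for real work).
    record("hello_world", "completion", || {
        std::thread::sleep(std::time::Duration::from_millis(10));
    });
}
```

Because the timer wraps only our own handling, VSCode and protocol time are excluded by construction, while compiler and Cargo work done inside the closure is included.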
I suggest we measure the following for small (e.g., hello world), medium (e.g., the RLS itself), and large (e.g., Servo) projects. (We should take snapshots of the projects rather than updating them from crates.io or whatever.)
- cold startup time
- warm startup time (we need to precisely define what these two mean)
- code completion ('free' and after a dot), probably for a number of types with different characteristics:
  - same crate vs upstream crate vs std lib
  - generic type vs concrete type
  - lots of options vs few options
  - etc.
- jump to def
- find all refs
- ident search
Maybe some other core stuff? I don't think we should test everything; this should be easy to maintain, update, and run.
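To make 'run this script' concrete, one option is a tiny harness that loops over the pinned project snapshots and the measures above, printing one CSV row per combination. This is only a sketch under assumptions: the snapshot paths, measure names, and the `run_measure` stub are all hypothetical.

```rust
use std::time::Instant;

// Hypothetical suite: pinned snapshots of a small, medium, and large
// project, plus the measures proposed above. All paths and names are
// placeholders.
const PROJECTS: &[(&str, &str)] = &[
    ("hello_world", "bench-projects/hello_world"),
    ("rls", "bench-projects/rls"),
    ("servo", "bench-projects/servo"),
];

const MEASURES: &[&str] = &[
    "cold_startup",
    "warm_startup",
    "completion_free",
    "completion_after_dot",
    "goto_def",
    "find_all_refs",
    "ident_search",
];

// Placeholder: drive the RLS against the snapshot at `path` and perform
// `measure` once. Filling this in is the real work.
fn run_measure(path: &str, measure: &str) {
    let _ = (path, measure);
}

fn main() {
    for &(project, path) in PROJECTS {
        for &measure in MEASURES {
            let start = Instant::now();
            run_measure(path, measure);
            // One CSV row per (project, measure) pair.
            println!("{},{},{}", project, measure, start.elapsed().as_millis());
        }
    }
}
```

Keeping the suite as a flat list of (project, measure) pairs should keep it cheap to maintain, update, and run, which is the goal above.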