Updated: Only auto-import from package.json #32517
Conversation
Here's a durable link to the diff between this and the original PR: https://github.com/andrewbranch/TypeScript/compare/old-package-json-pr..enhancement/only-import-from-package-json

@typescript-bot pack this

Heya @andrewbranch, I've started to run the tarball bundle task on this PR at 9cdad91. You can monitor the build here. It should now contribute to this PR's status checks.

Hey @andrewbranch, I've packed this into an installable tgz. You can install it for testing by referencing it in your
and then running
@andrewbranch thanks for the ping. I will try this PR tomorrow on PC with that project. |
@andrewbranch perf looks good. There is some delay, but it is barely noticeable (in comparison with the latest nightly, and a huge difference compared with the problematic builds). As for completions: it looks like the second problem, where the completion list was missing auto-imports, is resolved, but I will diff the returned list a bit later to see if there are any unexpectedly filtered items.
Thanks for checking, @IllusionMH! Your absolute times are actually a little better than mine now, but the diff is similar: even with the fix, the time is almost double what it was before. ~200ms isn't awful in itself, but I don't feel great about doubling the time. The problem is that almost everything that happens to get the full list is now fairly important and unavoidable. However, when gathering auto-imports, the exact same work happens on every keystroke, always producing the same results. Making edits in one file is highly unlikely to change the availability of auto-imports to that file. And in fact, making edits to any of your own source files is quite unlikely to change the availability of auto-imports of node_modules into your project. I think there may be an opportunity to cache auto-import completions for at least a session of editing a single file. I would be ok if the first
@andrewbranch BTW, can you also fix #32273 like #31893? |
That's not really related to what's happening here. What I'm doing is filtering auto-imports from all the things in the program down to ones that are relevant to you. What you seem to be asking for is for things that you don't explicitly depend on to be excluded from the program entirely, even though other things depend on them. Which auto-imports are surfaced doesn't fundamentally change how type checking works; excluding hard dependencies from the program does. The threads you've linked to already discuss the problems with that, so this PR isn't a good place to get into it. But I'd also point you to #31894, if you haven't yet seen it, which looks more relevant to your concerns.
I see, thanks. |
Restores (and improves upon) the behavior introduced in #31893 without the significant performance regression by adding several caches. (Closes #32441.)
Diff compared to original PR (trying to keep changes to master out of it but will fall behind every now and then)
Note: comments about performance and bugs and changes in the thread below are talking about various states during the draft lifetime of this PR. This PR description reflects the state as of the time I’m changing it from “Draft” to “Ready for Review,” August 26 at 12:00 PM PDT.
Performance impact of this PR
TL;DR: Disregarding the very first completions request made to TS Server, the worst-case completions request is 28% slower than before. But, getting the completions list plus completion details is only 1% slower at worst, and typical requests are 76–80% faster thanks to some added caching.
There are two language service operations to consider:

- `getCompletionsAtPosition` returns an array of available completions without much detailed info about each.
- `getCompletionEntryDetails` is called for a specific entry from the results of the former operation, and gets more details about that completion, like JSDoc, symbol display parts, and code actions that will occur.

In VS Code, the completion list UI appears after the former returns. The latter is then called in immediate succession, and its response inserts more details into the same UI. So, the duration of `getCompletionsAtPosition` is a lower bound on UI response time, but the sum of the two calls affects perceived responsiveness, since the UI doesn't settle until both are complete.

I measured response times before and after this PR in a large project and took the average of 5 samples of each type of measurement reported here. In reporting timings for cached calls, I'll report "Aug 10 Nightly" as "N/A" to indicate that the original code base performed no caching, but I'll use the uncached timings to calculate the percent difference, since those would be the timings measured in identical scenarios.
1. The package.json cache is populated upon the first request that uses it, and future changes to package.json files update the cache eagerly. In other words, this number only ever happens on the very first completions request that a project receives.
2. Cache lives for the duration of editing a single file, except after edge-case edits to that file that modify modules visible to the program.
3. Cache lives while the program is identical, which happens frequently for details, e.g. when navigating the completion list with arrow keys to view details of other items. The only time details requests trigger without a cache is after typing rapidly, such that previous requests get canceled.
4. The worst-case scenario for this PR is a cold cache (nothing in the lifetime of the language service has yet requested package.json information) followed by a cached details call. They can’t both be uncached, because the former populates the cache of the latter. The cache could be invalidated by continuing to type quickly, but at that point the original cold request for the list would be canceled too, and the next call would use the cached package.json.
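To make the two-operation flow concrete, here is a simplified sketch. The interfaces and function bodies are illustrative stand-ins, not the real tsserver API:

```typescript
// Hypothetical sketch of the two-phase completion flow described above.
// These shapes are simplified stand-ins, not the actual tsserver protocol.
interface CompletionEntry { name: string }
interface CompletionDetails { name: string; documentation: string }

// Cheap first pass: just the entry names, so the completion UI can appear quickly.
function getCompletionsAtPosition(): CompletionEntry[] {
  return [{ name: "readFile" }, { name: "readFileSync" }];
}

// Expensive second pass: resolve JSDoc, display parts, and code actions
// for the one entry the user is looking at.
function getCompletionEntryDetails(entry: CompletionEntry): CompletionDetails {
  return { name: entry.name, documentation: `docs for ${entry.name}` };
}

// Perceived responsiveness is bounded below by the first call, but the UI
// doesn't settle until the second call has returned as well.
const list = getCompletionsAtPosition();
const details = getCompletionEntryDetails(list[0]);
```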
Implementation
Even the absolute worst measurement of 56% slower is much better than #31893, which was ~12,000% slower. Here’s what made the difference.
Caching symlinks discovery
Getting the best module specifier to use from one module to another requires determining whether there are any symlinks hanging around. The original PR tried to get the module specifier for every possible auto-import in a loop, which re-ran the symlink discovery code every time. But the inputs to that code are constant over that loop, so it need only be run once. This PR introduces a function `moduleSpecifiers.withCachedSymlinks` that uses a constant set of symlinks within a callback. This alone brought the performance impact down from ~12,000% slower to ~100% slower.

Caching package.json content
The previous PR searched for and read package.json files from disk during each request. This PR adds:
@sheetalkamat suggested replacing the additional package.json watch with an existing directory watch that detects changes to node_modules. I tried this approach, but it's quite common to run `npm install --save dep` when `dep` was already in node_modules, required by something else, or `npm uninstall --save dep` when `dep` is still required by something else. In these cases, package.json changes, but node_modules does not.

Caching the auto-import list itself
Generally, the auto-imports available to a particular file don't change while editing that file alone. So, we now cache auto-imports, and invalidate the cache when:

- the user has `@types/node` via ATA, and makes an edit that causes the file to have imports of node core modules when it didn't before, or vice versa. The same logic introduced in #31893 (Proposal: If there's a package.json, only auto-import things in it, more or less) about inferring whether to offer node core module auto-imports still exists, so the cache needs to be invalidated when the input to that inference changes. (Note: I plan to make this inference smarter in a later PR; I didn't want this one to balloon in scope more than it already has.)

Caching auto-imports for completion entry details
Completion details display type information, so they can’t be as aggressively cached as the list of auto-imports available. But, completion details are commonly requested when the program hasn’t changed at all, so the same cache used for the auto-import list described above can be used for completion details requests if the project version hasn’t changed.
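As a rough illustration of the caching strategy in the last few sections, here is a minimal sketch, with all names and shapes hypothetical rather than the actual compiler internals: a package.json content cache evicted by a file-watcher callback, and an auto-import cache reused only while the project version is unchanged.

```typescript
// Hypothetical sketch of the caches described above; not the real TS internals.

// package.json contents, parsed once and invalidated by a file watcher.
class PackageJsonCache {
  private parsed = new Map<string, Record<string, unknown>>();
  constructor(private readFile: (path: string) => string) {}

  get(path: string): Record<string, unknown> {
    let result = this.parsed.get(path);
    if (result === undefined) {
      result = JSON.parse(this.readFile(path)); // hit disk only on a cold cache
      this.parsed.set(path, result!);
    }
    return result!;
  }

  // Wired up to a file watcher: a change on disk evicts the stale entry.
  onPackageJsonChanged(path: string): void {
    this.parsed.delete(path);
  }
}

// Auto-import entries, reused (e.g. for a details request issued right after
// the list request) only while the project version is unchanged.
interface AutoImportCache { projectVersion: number; entries: string[] }
let autoImports: AutoImportCache | undefined;

function getAutoImports(projectVersion: number, compute: () => string[]): string[] {
  if (autoImports !== undefined && autoImports.projectVersion === projectVersion) {
    return autoImports.entries; // program unchanged: skip recomputation
  }
  autoImports = { projectVersion, entries: compute() };
  return autoImports.entries;
}
```

The key design point, per the description above, is that the details request can share the list request's cache because it typically arrives before any further edit has bumped the project version.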
Avoiding computing the full specifier for a node_modules module
When determining whether a module that can be referred to by some node_modules-style path, e.g. `import 'foo/lib/whatever'`, should be filtered out by a package.json file or not, the path after `foo/` is irrelevant. The previous PR used an existing function that calculated the entire path, which requires FS operations (to look for a dependency's package.json and find a `types` entry). To avoid unnecessary FS operations, this PR adds an option to that function (`tryGetModuleNameAsNodeModule`) that prevents it from calculating the full path.
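For illustration only, here is a rough sketch of the idea; this helper is hypothetical, not the actual TypeScript function. package.json filtering only needs the package-name prefix of a specifier, which can be computed with string operations alone:

```typescript
// Hypothetical helper: extract the package name from a node_modules-style
// specifier without touching the filesystem. Only this prefix is needed to
// check the specifier against package.json dependencies.
function getPackageNameFromSpecifier(specifier: string): string {
  const parts = specifier.split("/");
  // Scoped packages ("@scope/name") keep their first two segments.
  return specifier.startsWith("@") ? parts.slice(0, 2).join("/") : parts[0];
}

function isInPackageJson(specifier: string, dependencies: Set<string>): boolean {
  // "foo/lib/whatever" passes iff "foo" is a declared dependency;
  // the "/lib/whatever" suffix never needs to be resolved on disk.
  return dependencies.has(getPackageNameFromSpecifier(specifier));
}
```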