Strategies for improving incremental Check time? #13538
True in some places, not true in others. Return type and parameter type annotations will generally make things slightly faster (the first avoids a union type construction in cases where there are multiple return expressions, the second causes contextual typing to do less work); adding type annotations to variable declarations in theory actually slows things down (we have to compute the initializer expression type either way, so an explicit type annotation adds an assignability check).
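A hypothetical illustration of that trade-off (not from the thread; the names are made up):

```ts
// Return type annotation: the checker verifies each return expression against
// the declared type instead of constructing the union `number | undefined`
// from the two return expressions.
function parse(input: string): number | undefined {
  if (input === "") return undefined;
  return Number(input);
}

// Variable annotation: the initializer's type has to be computed either way,
// and the annotation adds one extra assignability check on top of it.
const parsed: number | undefined = parse("42");
```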
@RyanCavanaugh Is that true for incremental compilation, or just for full compiles? Any other tips?
Typecheck results don't carry over during incremental compilation, so anything related to the type system applies equally to both. The difference between the first and later execution times is just warmed cache / JIT and a small number of lazy-init'd things (also, I think we may only do lib.d.ts checking on the first compile?). Another thing you can try is
Confirmed that What would you look at next? The target is to get incremental build times down under 1s. It seems unreasonable that a 1-line change should take so long to recompile.
Also see #10878
Here are some build times I collected for our app. Obviously, these are specific to the way our build pipeline and bundles are set up. Hopefully this helps someone in the future:
In the end, we ended up using Webpack for builds, and driving it with Gulp. This cut our incremental build times in half. Now the bottleneck is TSC's incremental compilation.
My suggestion is to skip all type checking in watch mode. You can generally configure this via options on webpack's TypeScript loader. Then, in a full build, type checking can be turned on again for quality assurance. Caveat: decorator metadata requires type info, so transpiling alone won't be enough.
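For example, a minimal sketch of a watch-mode config that skips type checking, assuming ts-loader (other loaders expose similar options):

```ts
// webpack.config.ts -- a sketch, not a drop-in config.
// `transpileOnly` makes ts-loader emit JS without invoking the type checker;
// run `tsc --noEmit` separately (e.g. in CI) to keep the type errors.
import type { Configuration } from "webpack";

const config: Configuration = {
  entry: "./src/index.ts",
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        loader: "ts-loader",
        options: { transpileOnly: true },
      },
    ],
  },
  resolve: { extensions: [".ts", ".tsx", ".js"] },
};

export default config;
```

A separate `tsc --noEmit` run (or a checker plugin) then restores full type checking for CI or release builds.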
@HerringtonDarkholme That's a really interesting idea! My fear is that without typechecking it will be annoying to catch issues where type changes break file/module interfaces (i.e. export A's type changed, breaking module B when it imports A). I'll try it out and report back. At a higher level, maybe you can help me understand why incremental checking takes so long. If I write some bad TypeScript, VSCode highlights the guilty code in red immediately (<1s). If this is what the "Check" step does during TS compilation, why does it take the TS compiler so long to complete that step, when VSCode does it so quickly? My mental model is of code as an AST, with types attached to nodes. When a type changes due to a code change:
Is this an accurate representation of how TSC incremental compilation works? If so, why does it take so long (time seems to scale with the number of types)? If not, how does it actually work? What is cached between incremental compilations, and what has to be recomputed?
The problem is that declarations in one file can affect errors, and emit, in arbitrary other files:

```ts
// foo.ts
interface A {
  x: number;
}

// bar.ts
let x: A = { x: 100 };
```

If we change the declaration of A in foo.ts, bar.ts can acquire an error even though bar.ts itself didn't change.

For emit:

```ts
// foo.ts
namespace X {
  // delete me
  export var y = 10;
}

// bar.ts
var y;
namespace X {
  let x = y;
}
```

When we delete the marked line in foo.ts, the emit of bar.ts changes, because the y referenced inside the namespace now resolves to the global y instead of the exported X.y.

Fun fact: A previous implementation of TS tried to maintain a reverse mapping of dependencies such that, given an invalidated node, you could determine which other nodes needed to be re-typechecked. This never worked 100%. I wrote a tool that iterated file edits and compared this "incremental" typecheck to the "clean" typecheck and produced a log file when the results were not the same; this tool never stopped finding bugs until we just gave up on the approach.

In an editor scenario, when a file is changed, we immediately typecheck the changed file and then check all the other files "later". This is usually faster than you can Ctrl-Tab to another file anyway, so it appears to be instantaneous.

Commandline watch compilation doesn't keep track of which files have changed and which haven't. If you are willing to get your entire project compiling under the
This sounds like the best possible solution! Least possible invalidation, least possible re-checking. The worst case is as bad as it is today, the best case is an order of magnitude speedup, and the average case is still a big speedup.
That's a bummer, though I'm sure it was annoying to keep finding bugs for a piece of code that so many people relied on. Do you think you will ever revisit the approach? It seems like a huge win to have lightning fast incremental compiles, and a good way to win over JS devs used to fast compiles (and we could brag to our Scala and Haskell friends).
After enabling Come to think of it, I don't really understand what As @HerringtonDarkholme mentioned,
Unlikely. It made non-incremental compilations much, much slower due to all the extra information that had to be generated and tracked. The rewrite that avoided this approach was 4x faster at non-incremental compilation -- basically, so fast that it could do a full typecheck in the time it took the old system to do an incremental typecheck. It's also worth noting the current typechecker can "start anywhere" - it doesn't need to do a full walk to compute a type at an arbitrary location. So for example when you type a new expression into a file and hover over it, there isn't a full typecheck needed to compute the type of the expression; we can simply compute what's needed. So in that regard it's nearly the best-of-both-worlds, minus the odd case of wanting to get a complete set of program errors given a small change in one file.
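A rough sketch of that on-demand behavior using the public compiler API (hypothetical file contents; the Language Service computes types lazily for the position you ask about rather than checking the whole program first):

```ts
import * as ts from "typescript";

// A minimal in-memory language service host over a single file.
const fileName = "example.ts";
const fileText = `const widths = [1, 2, 3].map(n => n * 2);`;

const host: ts.LanguageServiceHost = {
  getScriptFileNames: () => [fileName],
  getScriptVersion: () => "1",
  getScriptSnapshot: (name) =>
    name === fileName ? ts.ScriptSnapshot.fromString(fileText) : undefined,
  getCurrentDirectory: () => process.cwd(),
  getCompilationSettings: () => ({}),
  getDefaultLibFileName: (options) => ts.getDefaultLibFilePath(options),
  fileExists: ts.sys.fileExists,
  readFile: ts.sys.readFile,
};

const service = ts.createLanguageService(host, ts.createDocumentRegistry());

// Quick info at a position only computes the types needed for that expression.
const info = service.getQuickInfoAtPosition(fileName, fileText.indexOf("widths"));
console.log(info && ts.displayPartsToString(info.displayParts));
```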
Is there a way to just skip type checking for incremental builds? Is there a command-line flag for that? All I want for incremental builds is to transpile. Looking at the above comments, is
#10879?
@ORESoftware What are you using for bundling? With Webpack, you need to set:
I see. And keeping a reverse dependency graph only in incremental compile mode (but not in full compile mode) would probably be too many code paths to reasonably maintain.
That's due to the laziness alluded to here, right? Thanks for all the help. I hope that with enough community feedback we can think of some other ways to improve incremental compile time. As a user, this is key for usability/productivity.
Our company was also affected by slow incremental compilation times. We have a really large code base, and changing just one file would take 45-60 seconds to recompile. The solution we ended up with is running a file watcher and, whenever a .ts file is changed, creating a temporary tsconfig.json just for that one change and compiling that "project". This got us down to the 10-15 second range (a lot of time is spent in grunt tasks, so actual tsc time is probably around 5-7 seconds). Hope this might give some actionable ideas for tsc developers.
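A rough sketch of that technique (hypothetical paths and file names; uses only Node built-ins and assumes tsc is on the PATH and a tsconfig that supports `extends`, i.e. TS 2.1+):

```ts
// watch-one.ts -- compile only the file that just changed, via a throwaway tsconfig.
import { watch, writeFileSync } from "fs";
import { execFile } from "child_process";

const TEMP_CONFIG = "tsconfig.onefile.json";

// Note: fs.watch's `recursive` option is platform-dependent; a watcher library
// (chokidar, gaze, ...) is the more portable choice.
watch("src", { recursive: true }, (_event, fileName) => {
  if (!fileName || !fileName.endsWith(".ts")) return;

  // Reuse the main compiler options, but restrict the program to one root file.
  writeFileSync(
    TEMP_CONFIG,
    JSON.stringify(
      { extends: "./tsconfig.json", files: [`src/${fileName}`] },
      null,
      2
    )
  );

  execFile("tsc", ["-p", TEMP_CONFIG], (err, stdout) => {
    if (stdout) console.log(stdout);
    if (err) console.error(`tsc reported errors for ${fileName}`);
  });
});
```

Note that the single entry in `files` still pulls in everything it imports, so the check is not fully isolated, but it avoids re-checking the whole program on every change.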
45-60 seconds seems extremely long. Can you share your
Looks like the time actually improved recently. Below are stats. And no, we can't split it at this time.
tsc --diagnostics:
tsc -p tsconfig.onefile.json --diagnostics:
tsc -w --diagnostics:
This is much better :). Have you considered 1. splitting the project into multiple compilation units, or 2. not checking libraries with --skipLibCheck? On a related but different note, is your code base written as modules?
No, we are not going to split the codebase since we already solved it using the technique I described, so nothing to fix there. I remember trying skipLibCheck but there were some issues with it. Again, not going to fix what already works :) Yes, the codebase is all modules, which get compiled into AMD ones and optimized later with r.js.
We too have a big codebase (~5k files).

```ts
namespace mycompany.myapp.project
{
    export class MyClass extends ...
    {
    }
}
```

We had to set the max_old_space_size option for Node because the watch crashed after 3 builds otherwise. I already checked this GitHub project, so I can guess the answer to the following question, but let's try again: is there any tool to analyze the time spent in the check and emit phases? The codebase is "too big" to analyze each file by hand to look for complex generic types and other things. Node version is 7.3.0. Compiler options are
The generated files:
The other files (watched):
I too noticed that, for the same change, compilations with different scopes differ a lot in check/emit time:
I think there definitely should be room for TSC optimization in this area.
Within Google we organize code into "libraries" (typically one per directory, each containing usually around 10 source files) and compile each library with a separate invocation of tsc. Each library compilation generates d.ts files (using the --declaration flag), and downstream libraries compile against those d.ts files. We also use a persistent tsc process across compilations, which means even when just making changes inside a single library, the compiler only needs to reload the files you made changes to, and it can keep the other files cached. We measure the compilation time experienced by developers across the company as they write code, and with this system we find the median build time is currently just under one second. (If you change the API of a library low in the dependency stack, then all downstream libraries also rebuild, which then takes multiple seconds.) Unfortunately the mechanism for organizing this is all within https://bazel.build/, which is incompatible with webpack, gulp, etc. We are working to open source all of this because we'd like to use it with our open source TypeScript projects such as Angular. @alexeagle
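A rough sketch of what that per-library layout can look like outside of Bazel (hypothetical paths; this predates project references, so it relies only on `declaration`, `outDir`, and `extends`):

```jsonc
// libs/widgets/tsconfig.json -- one "library" of roughly ten source files
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    // Emit .d.ts files so dependent libraries can typecheck against the
    // declarations instead of re-checking these sources.
    "declaration": true,
    "outDir": "../../dist/widgets"
  },
  "include": ["src/**/*.ts"]
}
```

Downstream libraries then resolve imports of this library to `dist/widgets/*.d.ts` (for example via the `paths` compiler option), so an edit inside one library only re-runs that library's tsc invocation unless its public API, i.e. the emitted d.ts, changes.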
@mhegazy The underlying issue of slow incremental checks is not fixed - can we keep it open, please?
#10879 tracks updating
Awesome - thanks as always @mhegazy!
@evmar I was literally just exploring the exact same setup for incremental typechecking at Dropbox - we're also using bazel, with a similar-sounding setup, but haven't gotten our typechecking story straight yet. Very interested to hear that it has worked well for you. Are there any particular difficulties you had getting that setup working and so performant?
I can't think of any difficulties you'd run into (our monorepo layout inside Google makes some things hard, as we have no node_modules directory, but presumably we're the only ones doing that).
@alexeagle please open-source that tool :-)
@navels filed https://github.com/bazelbuild/rules_typescript/issues/29 so you can track it
Using TS2.1, our build times are painfully slow. This seems to be partially TS's fault, and partially Tsify's fault.
Here is the result of tsc --diagnostics:

And the 3rd compilation after running tsc -w --diagnostics:

Using Tsify + Browserify, each incremental compile takes ~8s (~10s with sourcemaps). There's already an issue tracking this slowness on TSify's end, and I'm looking into ways to improve the bundle stitching time (Rollup, SystemJS, concat, etc.) on our end.

I was wondering what the TS team's advice is for improving Check time more generally. What are common gotchas and patterns to avoid? Are there specific build flags that are helpful (e.g. noEmitOnError)? A coworker pointed out that in Scala and Haskell, adding param and return type annotations to all function signatures yields huge improvements in build times - is this the case with TypeScript too?

Thanks!
Also see #10018.