
Strategies for improving incremental Check time? #13538

Closed · bcherny opened this issue Jan 17, 2017 · 30 comments
Labels
Question An issue which isn't directly actionable in code

Comments

@bcherny

bcherny commented Jan 17, 2017

Using TS2.1, our build times are painfully slow. This seems to be partially TS's fault, and partially Tsify's fault.

Here is the result of tsc --diagnostics:

Files:           706
Lines:        121760
Nodes:        516136
Identifiers:  167590
Symbols:      265945
Types:         87899
Memory used: 391582K
I/O read:      0.05s
I/O write:     0.15s
Parse time:    1.57s
Bind time:     0.76s
Check time:    5.07s
Emit time:     1.92s
Total time:    9.31s

And 3rd compilation after running tsc -w --diagnostics:

Files:           706
Lines:        121760
Nodes:        516136
Identifiers:  167590
Symbols:      265945
Types:         87899
Memory used: 684821K
I/O read:      0.00s
I/O write:     0.04s
Parse time:    0.25s
Bind time:     0.00s
Check time:    3.54s
Emit time:     0.86s
Total time:    4.65s

Using Tsify + Browserify, each incremental compile takes ~8s (~10s with sourcemaps). There's already an issue tracking this slowness on TSify's end, and I'm looking into ways to improve the bundle stitching time (Rollup, SystemJS, concat, etc.) on our end.

I was wondering what the TS team's advice is for improving Check time more generally. What are common gotchas, and patterns to avoid? Are there specific build flags that are helpful (e.g. noEmitOnError)? A coworker pointed out that in Scala and Haskell, adding param and return type annotations to all function signatures yields huge improvements in build times - is this the case with TypeScript too?

Thanks!

Also see #10018.

@RyanCavanaugh
Member

For example, a coworker pointed out that in Scala and Haskell, adding param and return type annotations to all function signatures yields huge improvements in build times - is this the case with TypeScript too?

True in some places, not true in others. Return type and parameter type annotations will generally make things slightly faster (the first avoids a union type construction in cases where there are multiple return expressions; the second causes contextual typing to do less work); adding type annotations to variable declarations in theory actually slows things down (we have to compute the initializer expression type either way, so an explicit type annotation adds an assignability check).
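The first two points can be sketched concretely (function names here are hypothetical, purely for illustration):

```typescript
// Without a return annotation, the checker must construct the union of
// all return expression types (number | string here) before callers of
// parsePort can be checked.
function parsePort(raw: string) {
  return raw.length > 0 ? Number(raw) : "default";
}

// With an explicit annotation, each return expression is checked
// against the declared type directly; no union needs to be built.
function parsePortAnnotated(raw: string): number {
  return raw.length > 0 ? Number(raw) : -1;
}

console.log(parsePort("8080"), parsePortAnnotated("8080")); // 8080 8080
```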

@RyanCavanaugh RyanCavanaugh added the Question An issue which isn't directly actionable in code label Jan 17, 2017
@bcherny
Author

bcherny commented Jan 17, 2017

@RyanCavanaugh Is that true for incremental compilation, or just for full compiles? Any other tips?

@RyanCavanaugh
Member

Typecheck results don't carry over during incremental compilation, so anything related to the type system applies equally to both. The difference between the first and later execution times is just warmed cache / JIT and a small number of lazy-init'd things (also I think we may only do lib.d.ts checking on the first compile?).

Another thing you can try is skipLibCheck which should get you substantial gains. Note that it is not safe to always compile with this flag on - your CI server or something should leave it off to make sure you don't break anything long-term.
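For reference, the flag can live in tsconfig.json (a minimal fragment; all other options elided):

```json
{
  "compilerOptions": {
    "skipLibCheck": true
  }
}
```

One common pattern (not prescribed in this thread) is to keep a second, stricter config without the flag and point CI at that, so library declaration errors still surface long-term.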

@bcherny
Author

bcherny commented Jan 17, 2017

Confirmed that skipLibCheck shaved another 1s off the incremental build time, and 2s off the full build time.

What would you look at next? The target is to get incremental build times down under 1s. It seems unreasonable that a 1 line change should take so long to recompile.

@bcherny
Author

bcherny commented Jan 18, 2017

Also see #10878

@bcherny
Author

bcherny commented Jan 18, 2017

Here are some build times I collected for our app. Obviously, these are specific to the way our build pipeline and bundles are set up. Hopefully this helps someone in the future:

| Stack | Full build time (s) | 3rd-run incremental build time (s) |
| --- | --- | --- |
| Gulp + Browserify + TSify | 37 | 10 |
| Browserify + TSify | 15 | 6 |
| TSC | 9 (tsc --diagnostics) | 4 (tsc --diagnostics -w) |
| Webpack | 20 (webpack) | 4 (webpack -w) |

In the end, we went with Webpack for builds, driven by Gulp. This cut our incremental build times in half. Now the bottleneck is TSC's incremental compilation.

@HerringtonDarkholme
Contributor

HerringtonDarkholme commented Jan 19, 2017

My suggestion is to skip all type checking in watch mode. You can generally configure this in webpack's loader via options like transpileOnly. Compile errors will still be visible in the editor, so spotting typos won't be hard during development.

Then, for full builds, type checking can be turned on again for quality assurance. Caveat: decorator metadata requires type information, so transpiling alone won't be enough.

@bcherny
Author

bcherny commented Jan 19, 2017

@HerringtonDarkholme That's a really interesting idea! My fear is that without typechecking it will be annoying to catch issues where type changes break file/module interfaces (i.e. export A's type changed, breaking module B when it imports A). I'll try it out and report back.

At a higher level, maybe you can help me understand why incremental checking takes so long. If I write some bad TypeScript, VSCode highlights the guilty code in red immediately (<1s). If this is what the "Check" step does during TS compilation, why does it take the TS compiler so long to complete that step when VSCode does it so quickly?

My mental model is of code as an AST, with types attached to nodes. When a type changes due to a code change:

  • At best, just the immediate neighbors of that node in the AST need to be invalidated and re-checked
  • At worst, the paths from the changed node to each of its terminal nodes, and from the changed node to the root node need to be invalidated and re-checked

Is this an accurate representation of how TSC incremental compilation works? If so, why does it take so long (time seems to scale with number of types)? If not, how does it actually work? What is cached between incremental compilations, and what has to be recomputed?

@RyanCavanaugh
Member

The problem is that declarations in one file can affect errors, and emit, in arbitrary other files:

```ts
// foo.ts
interface A {
  x: number;
}

// bar.ts
let x: A = { x: 100 };
```

If we change x: number; in foo.ts to x: string, there is a new error in bar.ts even though that file is unchanged. And there can be indirection here to arbitrary levels.

For emit:

```ts
// foo.ts
namespace X {
  // delete me
  export var y = 10;
}

// bar.ts
var y;
namespace X {
  let x = y;
}
```

When we delete export var y in foo.ts, we need to change the emit of bar.ts.

Fun fact: A previous implementation of TS tried to maintain a reverse mapping of dependencies such that given an invalidated node, you could determine which other nodes needed to be re-typechecked. This never worked 100%. I wrote a tool that iterated file edits and compared this "incremental" typecheck to the "clean" typecheck and produced a log file when the results were not the same; this tool never stopped finding bugs until we just gave up on the approach.

In an editor scenario, when a file is changed, we immediately typecheck the changed file and then check all the other files "later". This is usually faster than you can Ctrl-Tab to another file anyway, so it appears to be instantaneous. Command-line watch compilation doesn't keep track of which files have changed and which haven't.

If you are willing to get your entire project compiling under the isolatedModules switch, then you can safely wire up your build system to do a simple emit of only changed files, which should be practically instant, followed by a re-typecheck.
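That "simple emit of only changed files" can be sketched with the compiler API's ts.transpileModule (assuming the typescript package is available); it transforms a single file syntactically, with no program construction and no cross-file checking, which is what makes per-file emit nearly instant:

```typescript
import * as ts from "typescript";

// Per-file emit: no Program, no type checking, just a syntactic
// transform of one source text to JavaScript.
const result = ts.transpileModule(`export const answer: number = 42;`, {
  compilerOptions: {
    module: ts.ModuleKind.CommonJS,
    target: ts.ScriptTarget.ES5,
    isolatedModules: true,
  },
});

console.log(result.outputText);
```

A build system can run this on each changed file on save, then kick off a full typecheck in the background.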

@bcherny
Author

bcherny commented Jan 19, 2017

Fun fact: A previous implementation of TS tried to maintain a reverse mapping of dependencies such that given an invalidated node, you could determine which other nodes needed to be re-typechecked.

This sounds like the best possible solution! Least possible invalidation, least possible re-checking. The worst case is as bad as it is today, the best case is an order of magnitude speedup, and the average case is still a big speedup.

This never worked 100%. I wrote a tool that iterated file edits and compared this "incremental" typecheck to the "clean" typecheck and produced a log file when the results were not the same; this tool never stopped finding bugs until we just gave up on the approach.

That's a bummer, though I'm sure it was annoying to keep finding bugs for a piece of code that so many people relied on. Do you think you will ever revisit the approach? It seems like a huge win to have lightning fast incremental compiles, and a good way to win over JS devs used to fast compiles (and we could brag to our Scala and Haskell friends).

If you are willing to get your entire project compiling under the isolatedModules switch, then you can safely wire up your build system to do a simple emit of only changed files, which should be practically instant, followed by a re-typecheck.

After enabling compilerOptions.isolatedModules and awesomeTypescriptLoaderOptions.useTranspileModule, incremental build times didn't seem to change much. This is outside the domain of TSC, but do you know if there's another build flag I should enable?

Come to think of it, I don't really understand what isolatedModules is doing. From this comment it sounds like it forgoes type checks between files (e.g. in your two examples above, TSC would not complain) - is that right?

As @HerringtonDarkholme mentioned, awesomeTypescriptLoaderOptions.transpileOnly did give a significant speedup, with incremental build now taking ~500ms at the cost of watch mode type safety.

@RyanCavanaugh
Member

Do you think you will ever revisit the approach?

Unlikely. It made non-incremental compilations much, much slower due to all the extra information that had to be generated and tracked. The rewrite that avoided this approach was 4x faster at non-incremental compilation -- basically, so fast that it could do a full typecheck in the time it took the old system to do an incremental typecheck.

It's also worth noting the current typechecker can "start anywhere" - it doesn't need to do a full walk to compute a type at an arbitrary location. So for example when you type a new expression into a file and hover over it, there isn't a full typecheck needed to compute the type of the expression; we can simply compute what's needed. So in that regard it's nearly the best-of-both-worlds, minus the odd case of wanting to get a complete set of program errors given a small change in one file.

isolatedModules by itself doesn't produce the speedup - it's that a compilation that succeeds under this flag can be safely transpiled by a tool like awesomeTypescriptLoaderOptions.transpileOnly because it's been verified to not have cross-file emit dependencies.
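One concrete cross-file emit dependency that the isolatedModules check guards against is the const enum. A sketch (the ./enums module is hypothetical and never actually resolved, since transpileModule works on one file in isolation):

```typescript
import * as ts from "typescript";

// Imagine enums.ts contains:  export const enum Color { Red }
// A whole-program compile inlines Color.Red to a literal. A per-file
// transpiler cannot, because the enum's value lives in another file,
// so the emitted JS keeps a runtime property access -- on a const enum
// that emits no runtime object at all.
const transpiled = ts.transpileModule(
  `import { Color } from "./enums";
export const c = Color.Red;`,
  { compilerOptions: { module: ts.ModuleKind.CommonJS } }
);

// The un-inlined access survives in the output; executing it would
// throw. This is exactly the breakage isolatedModules flags up front.
console.log(transpiled.outputText);
```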

@ORESoftware

ORESoftware commented Jan 20, 2017

Is there a way to just skip type checking for incremental builds? Is there a command-line flag for that? All I want for incremental builds is to transpile. Looking at the above comments, is --skipLibCheck the best option for this? I tried it and it seemed to speed things up by about 25-30%.

@normalser

#10879 ?

@bcherny
Author

bcherny commented Jan 20, 2017

@ORESoftware What are you using for bundling?

With Webpack, you need to set:

  1. In tsconfig.json#compilerOptions: "isolatedModules": true
  2. In tsconfig.json#awesomeTypescriptLoaderOptions: "transpileOnly": true, "useTranspileModule": true
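Spelled out as a tsconfig.json fragment (all other options elided):

```json
{
  "compilerOptions": {
    "isolatedModules": true
  },
  "awesomeTypescriptLoaderOptions": {
    "transpileOnly": true,
    "useTranspileModule": true
  }
}
```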

@bcherny
Author

bcherny commented Jan 20, 2017

Unlikely. It made non-incremental compilations much, much slower due to all the extra information that had to be generated and tracked. The rewrite that avoided this approach was 4x faster at non-incremental compilation -- basically, so fast that it could do a full typecheck in the time it took the old system to do an incremental typecheck.

I see. And keeping a reverse dependency graph only in incremental compile mode (but not in full compile mode) would probably be too many code paths to reasonably maintain.

It's also worth noting the current typechecker can "start anywhere" - it doesn't need to do a full walk to compute a type at an arbitrary location.

That's due to the laziness alluded to here right?

Thanks for all the help. I hope that with enough community feedback we can think of some other ways to improve incremental compile time. As a user, this is key for usability/productivity.

@hippich

hippich commented Feb 1, 2017

Our company was also affected by slow incremental compilation times. We have a really large code base, and changing just one file would take 45-60 seconds to recompile.

The solution we ended up with is running a file watcher and, whenever a .ts file changes, creating a temporary tsconfig.json just for that one change and compiling that "project". This got us down to the 10-15 second range (a lot of the time is spent in grunt tasks, so actual tsc time is probably around 5-7 seconds).

Hope this gives the tsc developers some actionable ideas.

@mhegazy
Contributor

mhegazy commented Feb 1, 2017

Our company was also affected by slow incremental compilation times. We have a really large code base, and changing just one file would take 45-60 seconds to recompile.

45-60 seconds seems extremely long. Can you share your tsc --diagnostics output? Also, have you considered splitting your code base into multiple compilation units?

@hippich

hippich commented Feb 1, 2017

Looks like the time actually improved recently. Below are the stats. And no, we can't split it at this time.

tsc --diagnostics

Files:            1557
Lines:          357854
Nodes:         2006063
Identifiers:    634113
Symbols:        433613
Types:          112325
Memory used:   822825K
I/O read:        0.32s
I/O write:       1.66s
Parse time:      7.15s
Bind time:       2.07s
Check time:     15.53s
Emit time:       8.67s
Total time:     33.43s

tsc -p tsconfig.onefile.json --diagnostics

Files:              23
Lines:           47378
Nodes:          170956
Identifiers:     67734
Symbols:        115011
Types:           19355
Memory used:   117586K
I/O read:        0.02s
I/O write:       0.00s
Parse time:      0.48s
Bind time:       0.37s
Check time:      1.56s
Emit time:       0.04s
Total time:      2.44s

tsc -w --diagnostics

Files:            1557
Lines:          357854
Nodes:         2006063
Identifiers:    634113
Symbols:        433613
Types:          112325
Memory used:   828437K
I/O read:        0.09s
I/O write:       1.60s
Parse time:      6.93s
Bind time:       2.24s
Check time:     14.13s
Emit time:       7.37s
Total time:     30.67s
6:29:37 PM - Compilation complete. Watching for file changes.
6:29:46 PM - File change detected. Starting incremental compilation...
Files:            1557
Lines:          357853
Nodes:         2006063
Identifiers:    634113
Symbols:        433613
Types:          112325
Memory used:  1012695K
I/O read:        0.00s
I/O write:       0.31s
Parse time:      3.70s
Bind time:       0.00s
Check time:     14.44s
Emit time:       7.63s
Total time:     25.77s
6:30:12 PM - Compilation complete. Watching for file changes.

@mhegazy
Contributor

mhegazy commented Feb 1, 2017

This is much better :). Have you considered 1. splitting the project into multiple compilation units, or 2. skipping checks for libraries with --skipLibCheck?

On a related but different note, is your code base written as modules?

@hippich

hippich commented Feb 1, 2017

No, we are not going to split the codebase, since we already solved the problem using the technique I described. So nothing to fix there.

I remember trying skipLibCheck but there were some issues with it. Again, not going to fix what already works :)

Yes, the codebase is all modules, which get compiled to AMD and optimized later with r.js.

@mttcr

mttcr commented Feb 1, 2017

We too have a big codebase (~5k files).
We already split the files into two parts: the first part is compiled once, since those files are generated, and the others are watched. Below are the diagnostics of the compilation. The whole application uses "namespace" (a bit like Java packages: every file is in a namespace, and each file has only one class).

```ts
namespace mycompany.myapp.project
{
   export class MyClass extends ...
   {
   }
}
```

We had to set the max_old_space_size option for Node.js because the watch crashed after 3 builds otherwise.
Projects have some dependencies: the top folder depends on the middle folder, which depends on the root folder. I could build the root, generate the declarations (like I do for the generated files, which produce one big single d.ts file), then compile the middle, etc.

I already checked this GitHub project, so I can guess the answer to the following question, but let's try again: is there any tool to analyze where the check and emit time is spent? The codebase is "too big" to analyze each file by hand looking for complex generic types and other things.
One last question: what is the main difference from another object-oriented language compiler like Java's, whose incremental build is instantaneous even with 50k files (our codebase)?

Node version is 7.3.0.

Compiler options are:

```json
{
    "target": "es5",
    "charset": "UTF8",
    "removeComments": false,
    "noImplicitAny": true,
    "suppressImplicitAnyIndexErrors": true,
    "noImplicitReturns": true,
    "noImplicitThis": true,
    "noUnusedLocals": true,
    "sourceMap": false,
    "diagnostics": false,
    "noLib": true,
    "skipLibCheck": true,
    "skipDefaultLibCheck": true,
    "noEmitHelpers": true,
    "alwaysStrict": true,
    "experimentalDecorators": true
}
```

The generated files:

c:\..\nodejs\node --max_old_space_size=8192 ...../typescript/lib/tsc -p .\.vscode\tsconfigJava.json
Files:          2187
Lines:        162939
Nodes:        505592
Identifiers:  156502
Symbols:      105413
Types:         38288
Memory used: 327976K
I/O read:      0.31s
I/O write:     1.60s
Parse time:    1.41s
Bind time:     0.44s
Check time:    0.85s
Emit time:     4.71s
Total time:    7.41s

The other files (watched)

c:\...\nodejs\node --max_old_space_size=8192 ...../typescript/lib/tsc --watch -p .\.vscode\tsconfigLight.json
Files:           2255
Lines:         512388
Nodes:        2465911
Identifiers:   792544
Symbols:       680758
Types:         179865
Memory used: 1734512K
I/O read:       0.43s
I/O write:      2.00s
Parse time:     3.55s
Bind time:      1.45s
Check time:     8.45s
Emit time:      9.23s
Total time:    22.68s
08:25:25 - Compilation complete. Watching for file changes.
08:25:41 - File change detected. Starting incremental compilation...
Files:           2255
Lines:         512388
Nodes:        2465911
Identifiers:   792544
Symbols:       680758
Types:         179865
Memory used: 2842327K
I/O read:       0.00s
I/O write:      0.41s
Parse time:     0.09s
Bind time:      0.01s
Check time:     9.09s
Emit time:      6.98s
Total time:    16.17s
08:25:57 - Compilation complete. Watching for file changes.

@wclr

wclr commented Apr 1, 2017

I too noticed a problem: for the same change, compilations with different scopes differ a lot in check/emit time:
#14965

I think there is definitely room for TSC optimization here.

@evmar
Contributor

evmar commented Apr 11, 2017

Within Google we organize code into "libraries" (typically one per directory, each containing usually around 10 source files) and compile each library with a separate invocation of tsc.

Each library compilation generates d.ts files (using the --declaration flag to tsc). If library A depends on library B, we only need to recompile A if the d.ts files produced by B change, which means that changes internal to B (edits that don't change B's API) don't trigger rebuilds on downstream libraries.
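That "rebuild downstream only when the .d.ts changed" rule can be sketched in shell (paths and the simulated declaration output are illustrative; this is not Google's actual tooling):

```shell
# Recompile library A only when library B's public surface (its .d.ts)
# actually changed. We simulate B's generated declaration file here.
dir=$(mktemp -d)

# First build of B: record its declaration output.
echo 'export declare function f(): number;' > "$dir/b.d.ts"
cp "$dir/b.d.ts" "$dir/b.d.ts.last"

# B is edited internally; the regenerated .d.ts is byte-identical.
echo 'export declare function f(): number;' > "$dir/b.d.ts"

if cmp -s "$dir/b.d.ts" "$dir/b.d.ts.last"; then
  echo "B's API unchanged: skip rebuilding A"
else
  echo "B's API changed: rebuild A"
fi
```

Since an internal edit leaves the declaration file identical, the comparison short-circuits the downstream rebuild.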

We also use a persistent tsc process across compilations, which means even when just making changes inside a single library, the compiler only needs to reload the files you made changes to, and it can keep the other files cached.

We measure the compilation time experienced by developers across the company as they write code and with this system we find the median build time is currently just under one second. (If you change the API of a library low in the dependency stack, then all downstream libraries also rebuild, which is then multiple seconds.)

Unfortunately the mechanism for organizing this is all within https://bazel.build/, which is incompatible with webpack, gulp, etc. We are working to open source all of this because we'd like to use it with our open-source TypeScript projects such as Angular. @alexeagle

trotyl added a commit to trotyl/issue-tracker that referenced this issue Apr 15, 2017
@mhegazy mhegazy closed this as completed May 1, 2017
@bcherny
Author

bcherny commented May 1, 2017

@mhegazy The underlying issue of slow incremental checks is not fixed - can we keep it open, please?

@mhegazy
Contributor

mhegazy commented May 1, 2017

#10879 tracks updating --watch implementation.

@bcherny
Author

bcherny commented May 1, 2017

Awesome- thanks as always @mhegazy!

@dgoldstein0

@evmar I was literally just exploring the exact same setup for incremental typechecking at Dropbox - we're also using bazel, with a similar sounding setup, but haven't gotten our typechecking story straight yet. Very interested to hear that it has worked well for you. Are there any particular difficulties you had getting that setup working and so performant?

@alexeagle
Contributor

I can't think of any difficulties you'd run into (our monorepo layout inside Google makes some things hard, as we have no node_modules directory, but presumably we're the only ones doing that).
The biggest hurdle is writing the BUILD files. The information they contain is mostly derivable from the TypeScript sources, so we have a tool we'd like to open source which updates the BUILD files automatically.
https://github.com/bazelbuild/rules_typescript is our implementation; it's early but working. You should be able to get <1s roundtrip for any source change even in a large program (provided your libraries are broken up into sufficiently small ones). If you try it, please file issues even for things like bad error reporting.

@navels

navels commented Sep 5, 2017

@alexeagle please open-source that tool :-)

@alexeagle
Contributor

@navels filed https://github.com/bazelbuild/rules_typescript/issues/29 so you can track it
