
[WIP] Use compiler services instead of language services #693

Closed
wants to merge 2 commits

Conversation

kitsonk commented Sep 6, 2018

This is a work in progress to investigate using ts.createProgram() instead of the language services to transpile each dependency. ts.createProgram() works much more like tsc does out of the box, making it easier to generate an AMD bundle of modules in the future. The language services are designed more for use in an editor, where incremental changes are transpiled as the file is edited.

At the moment, it is purely additional APIs, with compileProgram() and runProgram() replacing compile() and run() in the compiler. A few of the compiler unit tests are broken now, so they are commented out, but the rest of the tests mostly work (except some of the errors functional cases, because of the changes in the error stacks).

Currently, the process is like this:

  • Create language service host
  • To transpile a file, request the file to be emitted from the language service host
  • TypeScript then gathers all the information it needs, recursively requesting resolution of different modules until all the necessary code is loaded and transpiled.
  • The deno compiler then does a two-pass gathering of dependencies followed by eval'ing the transpiled code. (A sketch of this flow follows below.)
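For reference, here is a minimal sketch of that language-service flow. The host shape is simplified, and fetchModule() is a hypothetical stand-in for deno's actual module loading:

```ts
import * as ts from "typescript";

// Hypothetical stand-in for deno's module fetch/resolution.
declare function fetchModule(fileName: string): string;

const sources = new Map<string, string>();

const host: ts.LanguageServiceHost = {
  getCompilationSettings: () => ({ module: ts.ModuleKind.AMD }),
  getScriptFileNames: () => Array.from(sources.keys()),
  getScriptVersion: () => "1",
  getScriptSnapshot: (fileName) => {
    if (!sources.has(fileName)) {
      sources.set(fileName, fetchModule(fileName));
    }
    return ts.ScriptSnapshot.fromString(sources.get(fileName)!);
  },
  getCurrentDirectory: () => "",
  getDefaultLibFileName: (options) => ts.getDefaultLibFileName(options),
};

const service = ts.createLanguageService(host);

// Ask for a single file's emit; TypeScript walks the import graph
// through the host to resolve and load dependencies as needed.
const { outputFiles } = service.getEmitOutput("/root.ts");
const transpiled = outputFiles[0].text;
```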

With this PR, it is like this:

  • Create a program with the root file being the file passed as the argument
  • Request the program be emitted from TypeScript
  • TypeScript requests the compiler host to read files
  • TypeScript requests the compiler host to "write files" which are the transpiled code, which we add to the modules
  • The deno compiler then does a two pass gathering of depenedencies and then eval'ing the transpiled code.
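And a minimal sketch of the compiler-host flow described above, again with fetchModule() as a hypothetical placeholder for deno's module loading:

```ts
import * as ts from "typescript";

// Hypothetical stand-in for deno's module fetch/resolution.
declare function fetchModule(fileName: string): string;

// Transpiled output collected from TypeScript's writeFile callbacks.
const modules = new Map<string, string>();

const options: ts.CompilerOptions = { module: ts.ModuleKind.AMD };
const host = ts.createCompilerHost(options);
host.readFile = (fileName) => fetchModule(fileName);
host.writeFile = (fileName, data) => {
  modules.set(fileName, data);
};

// The file passed on the command line becomes the program's root.
const program = ts.createProgram(["/root.ts"], options, host);
program.emit();
```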

kitsonk commented Sep 6, 2018

For information:

[kkelly@AUkkelly ~/github/deno (master)]$ time ./out/debug/deno tests/005_more_imports.ts --reload
Hello

real    0m0.192s
user    0m0.114s
sys     0m0.061s

[kkelly@AUkkelly ~/github/deno (transpile-project)]$ time ./out/debug/deno tests/005_more_imports.ts --reload
Hello

real    0m5.435s
user    0m8.149s
sys     0m0.269s

So there is a serious slowdown using this method... I need to investigate what is going on here, and whether there is pre-warming that can be done to the compiler. I believe that creating the language service instance during the compiler's construction pre-warms a lot of TypeScript; that warm-up is now missing, because we don't really hit TypeScript until invocation.

What might work, which I was thinking about overnight, is to use ts.createProgram() just for generating bundles, and keep the recursive single-file emit for normal running.

qti3e commented Sep 6, 2018

@kitsonk I experienced the same thing with ts.createProgram() a while ago when I was working on gendoc. It appeared to me that this method is not efficient enough for the runtime (deno file.ts), but it provides the best APIs for things like deno bundle, deno test, and other deno utility functions. The language service API is also the right approach for deno doc, in my opinion.

kitsonk commented Sep 6, 2018

@qti3e yeah, thanks for the feedback... so what I think I will do is make the DenoCompiler provide a clean abstraction for both approaches: a "compile program" path that returns a bundle of code using ts.createProgram(), while standard runs still use the single-file emit. We also likely need a public API that says "here is an AMD bundle, can you please eval this." (Something like the sketch below.)
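Roughly the shape I have in mind; the names here are illustrative, not the actual DenoCompiler surface:

```ts
// Illustrative shape only; not the actual DenoCompiler API.
interface DenoCompiler {
  // Standard runs: recursive single-file emit via the language services.
  run(moduleSpecifier: string, containingFile: string): void;
  // Bundling: whole-program compile via ts.createProgram(),
  // returning a single AMD bundle as text.
  compileProgram(rootFileName: string): string;
  // "Here is an AMD bundle, can you please eval this."
  evalBundle(bundleSource: string): void;
}
```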

kitsonk mentioned this pull request Sep 6, 2018
ry commented Sep 7, 2018

It's odd to me that it'd be so much slower. What's createProgram doing that languageService isn't?

One idea is to use the V8 profiler to figure this out: https://github.com/v8/v8/wiki/V8-Profiler

kitsonk commented Sep 8, 2018

@ry any bright ideas about how to do that? Asking for a friend... I can play with d8, or I can recreate something simple in Node.js and dump a profile from there as well.
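For reference, the usual V8 tick-profiling workflow in Node.js looks roughly like this (the script name is illustrative):

```
# Profile a Node.js re-creation of the compiler, then post-process
# the V8 isolate log into a readable tick report.
node --prof compiler-experiment.js
node --prof-process isolate-*-v8.log > ticks.txt
```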

kitsonk commented Sep 8, 2018

Ok, I re-created enough of the compiler in Node.js to be able to profile the two approaches. It is very confusing, to say the least. It seems createProgram would scale better with larger projects, and I need to investigate whether something in the current implementation is killing performance. I wouldn't be surprised, because the way the APIs were accessed in my Node.js hack was different and might therefore be confusing the compiler. The real root cause, I suspect, is that emitFile cheats for performance reasons: a "reasonably accurate" type check is sufficient, knowing that a compiler will eventually do a full type check.

Concerns around compiler performance are covered in microsoft/TypeScript#25658, which also suggests another option: when transpiling we could use --skipDefaultLibCheck, which I suspect is similar to what is being done with the language services. I highly suspect this because, when I started using the compiler services, it found a couple more errors in the global lib that emitFile was just accepting.

There is a "risk" that someone could be conflicting with a symbol in the deno global environment with --skipDefaultLibCheck, but it might be "worth" the performance.

Here are some interesting bits:

ry commented Sep 8, 2018

@kitsonk You should be able to use --prof and other V8 flags in deno currently.

kitsonk commented Sep 8, 2018

Hmmm... --skipDefaultLibCheck doesn't seem to make a bit of difference to performance. The other thing I am noticing is that the program actually executes really quickly... at least perceptibly the same for me, but after execution deno seems to "hang" for those few seconds, like maybe it is caught up in a big GC, which is ultimately useless because you are exiting anyway.

kitsonk commented Sep 8, 2018

Ok, I got profiling working directly in deno... They are strange beasts. Here are the tick profiles for both master and the working copy of this branch.

Given:

./out/debug/deno --prof tests/005_more_imports.ts --reload

This branch, summary:

 [Summary]:
   ticks  total  nonlib   name
   1208   41.6%   43.5%  JavaScript
   1546   53.3%   55.6%  C++
     57    2.0%    2.1%  GC
    124    4.3%          Shared libraries
     25    0.9%          Unaccounted

And then the current master summary:

 [Summary]:
   ticks  total  nonlib   name
   1023   39.3%   41.3%  JavaScript
   1406   54.1%   56.7%  C++
     66    2.5%    2.7%  GC
    121    4.7%          Shared libraries
     51    2.0%          Unaccounted

The bottom-up profiles show the two are quite different beasts. First, this branch:

   ticks parent  name
    633   21.8%  v8::internal::CheckObjectType(v8::internal::Object*, v8::internal::Smi*, v8::internal::String*)
     43    6.8%    LazyCompile: ~scan gen/bundle/main.js:15973:24
     17   39.5%      LazyCompile: *nextToken gen/bundle/main.js:25220:29
     15   88.2%        LazyCompile: ~parseSemicolon gen/bundle/main.js:25347:34
     15  100.0%          LazyCompile: ~parseTypeMemberSemicolon gen/bundle/main.js:26473:44
     15  100.0%            LazyCompile: ~parsePropertyOrMethodSignature gen/bundle/main.js:26551:50
      1    5.9%        LazyCompile: ~parseExpected gen/bundle/main.js:25300:33
      1  100.0%          LazyCompile: ~parseTypeAliasDeclaration gen/bundle/main.js:29463:45
      1  100.0%            LazyCompile: ~parseDeclarationWorker gen/bundle/main.js:28995:42
      1    5.9%        LazyCompile: ~createIdentifier gen/bundle/main.js:25423:36
      1  100.0%          LazyCompile: ~parseIdentifierName gen/bundle/main.js:25442:39
      1  100.0%            LazyCompile: ~parseEntityName gen/bundle/main.js:26106:35

And current master:

   ticks parent  name
    523   20.1%  v8::internal::CheckObjectType(v8::internal::Object*, v8::internal::Smi*, v8::internal::String*)
     61   11.7%    LazyCompile: ~forEachChild gen/bundle/main.js:24276:28
     25   41.0%      LazyCompile: *bindChildrenWorker gen/bundle/main.js:35223:38
     13   52.0%        LazyCompile: ~bindChildren gen/bundle/main.js:35178:32
     10   76.9%          LazyCompile: ~bindContainer gen/bundle/main.js:35077:33
      6   60.0%            LazyCompile: *bind gen/bundle/main.js:36412:24
      4   40.0%            LazyCompile: ~bind gen/bundle/main.js:36412:24
      3   23.1%          LazyCompile: ~bind gen/bundle/main.js:36412:24
      2   66.7%            LazyCompile: *visitNodes gen/bundle/main.js:24242:26
      1   33.3%            LazyCompile: ~forEach gen/bundle/main.js:9326:23
     12   48.0%        LazyCompile: *bind gen/bundle/main.js:36412:24
      8   66.7%          LazyCompile: *visitNode gen/bundle/main.js:24239:25
      8  100.0%            LazyCompile: ~forEachChild gen/bundle/main.js:24276:28
      3   25.0%          LazyCompile: *forEach gen/bundle/main.js:9326:23
      3  100.0%            LazyCompile: ~bindEach gen/bundle/main.js:35199:28
      1    8.3%          LazyCompile: *visitNodes gen/bundle/main.js:24242:26
      1  100.0%            LazyCompile: ~forEachChild gen/bundle/main.js:24276:28

ry commented Sep 8, 2018

@kitsonk Nice work! 2601 ticks maybe isn't enough to really say what's happening; is there a way we can test a similar workload in a loop? It would also be nice if there were some easy way of source-mapping those filenames...

The top two:

     65    2.5%    2.6%  LazyCompile: ~scan gen/bundle/main.js:15973:24
     63    2.4%    2.5%  LazyCompile: ~forEachChild gen/bundle/main.js:24276:28

seem to be inside the TypeScript compiler. Does LazyCompile suggest that V8 warm-up would improve this? Maybe @piscisaureus knows more?

kitsonk commented Sep 10, 2018

I tried warming up more of TypeScript by doing a createProgram() in the code and holding on to a reference to it during the bundle. It doesn't seem to have helped, and if anything it increased the ticks. It certainly changed how the code is being exercised.
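The warm-up attempt was roughly this shape (a sketch, not the exact branch code; the warm-up root file name is hypothetical):

```ts
import * as ts from "typescript";

const options: ts.CompilerOptions = { module: ts.ModuleKind.AMD };
const host = ts.createCompilerHost(options);

// Created during compiler construction and held for the process lifetime,
// hoping to pre-warm TypeScript's internals before the real emit happens.
const warmupProgram = ts.createProgram(["/assets/globals.d.ts"], options, host);
```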

The bottom-up profile now looks quite a bit more like the master branch, but time-wise there is still a crazy difference in duration; it still looks like this:

$ time ./out/debug/deno tests/005_more_imports.ts --reload
Hello

real    0m4.048s
user    0m7.561s
sys     0m0.197s

Anyways, I am going to push my branch. But I think the best bet at the moment is to keep createProgram as a separate API used for creating bundles. There is so much difference in performance, and no obvious way to mitigate it.

ry commented Sep 24, 2018

Closing this one as createProgram is too slow. I think this will be useful code in the future when we address bundling. Thanks for investigating, @kitsonk!

ry closed this Sep 24, 2018
kitsonk deleted the transpile-project branch August 2, 2022
hardfist pushed a commit to hardfist/deno that referenced this pull request Aug 7, 2024
Required for denoland#23305

Allows cloning `Rc<ContextState>` into new contexts.

During cleanup we assert that the main context does not have more than one
strong count to `JsRuntimeState`, but this may not be true when there is more
than one context that has not yet been GC'ed. Hence, `JsRuntimeState` is now
stored in `OpCtx` as a pointer, as it is guaranteed to live longer than ops.