For the last two weeks I spent quite some time profiling `deno_tcp.ts`. I can't really figure it out on my own and I hit a lot of problems along the way, so I'm sharing my findings to get more eyes on it.

First I generated some flamegraphs to see what's going on. I used `flamegraph`, which is installable via `cargo`.

`wrk` command used for benchmarking:
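The exact invocations didn't survive in this copy, so the following is a representative sketch rather than the original commands; the binary path, script port, and `wrk` parameters are assumptions:

```shell
# Representative sketch -- not the original commands.
# Install the flamegraph tool as a cargo subcommand:
cargo install flamegraph

# Record a flamegraph of the release binary while it serves traffic
# (binary path and script name as used in the deno repo):
sudo flamegraph ./target/release/deno deno_tcp.ts

# In another terminal, generate load; address and duration are assumptions:
wrk -d 10s --latency http://127.0.0.1:4500/
```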
deno_core: `deno_core.svg`

deno_tcp: `deno_tcp.svg`
It seems one of the main bottlenecks is garbage collection by V8, visible in the bottom right-hand side of the graph. It's the "Scavenger" pass that reclaims young-generation memory, and it constitutes roughly 6% of the whole graph. `deno_core` doesn't suffer from the same problem.

Checking the GC logs from V8 confirms this suspicion.
deno core:
deno_tcp:
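The logs themselves aren't reproduced above; assuming V8 flags can be passed through deno's `--v8-flags` option (the passthrough syntax here is an assumption), they can be produced roughly like this:

```shell
# Sketch: enable V8 GC tracing while running the benchmark script.
# Each scavenge / mark-compact then prints a summary line to stderr.
deno --v8-flags=--trace-gc deno_tcp.ts
```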
## Profiling data
I generated some profiling data using the following commands.

Then generate a profiling file from the V8 log.

Open `profview` in your browser; it's located in the V8 `tools` directory.
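The concrete commands were lost from this copy; a typical V8 CPU-profiling workflow looks roughly like the sketch below. The `--v8-flags` passthrough, the V8 checkout path, and the output file names are assumptions:

```shell
# 1. Run the benchmark with V8's sampling profiler enabled;
#    this writes an isolate-*-v8.log file in the working directory:
deno --v8-flags=--prof deno_tcp.ts

# 2. Preprocess the V8 log into JSON that profview understands
#    (the tick processor lives in V8's tools/ directory):
$V8_ROOT/tools/linux-tick-processor --preprocess isolate-*.log > profile.json

# 3. Open $V8_ROOT/tools/profview/index.html in a browser and load profile.json.
```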
Processed files:
deno_core_http_bench
deno_tcp.ts
Here we can clearly see that when running `deno_tcp.ts`, V8 spends ~12% of its time in `__kernelrpc_vm_remap`. In the case of `deno_core_http_bench` it is only 0.45% of the time.

## Looking for the problem
By using `--trace-gc-object-stats` we can see what objects are being collected. Open the resulting log in https://mlippautz.github.io/v8-heap-stats/ to see the breakdown.
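For completeness, a sketch of producing the object-stats log; the `--v8-flags` passthrough and redirection are assumptions:

```shell
# Sketch: dump per-GC object statistics to a file;
# the v8-heap-stats page consumes this log format.
deno --v8-flags=--trace-gc-object-stats deno_tcp.ts > object-stats.log 2>&1
```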
I found a presentation that explains V8's GC logs:
https://www.slideshare.net/NodejsFoundation/are-your-v8-garbage-collection-logs-speaking-to-youjoyee-cheung-alibaba-cloudalibaba-group
It suggests that we've got a lot of closures, and the heap snapshot confirms that: there are a lot of "contexts" and "function types" in the growing heap.
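That finding is consistent with a handler that allocates a fresh closure on every request. The following is a hypothetical JavaScript sketch of that allocation pattern and a closure-free alternative; it is not the actual `deno_tcp.ts` code, and `write` is a stand-in for the socket write:

```javascript
// Hypothetical sketch -- NOT the actual deno_tcp.ts code.
// Pattern A: a fresh closure is allocated for every request. The arrow
// function captures `conn`, so V8 allocates a closure (and a context)
// per call -- short-lived garbage for the Scavenger to reclaim.
function handleWithClosure(conn) {
  const respond = () => write(conn, "HTTP/1.1 200 OK\r\ncontent-length: 0\r\n\r\n");
  return respond();
}

// Pattern B: a hoisted top-level function takes `conn` as a parameter,
// so no per-request closure or context object is created.
function respond(conn) {
  return write(conn, "HTTP/1.1 200 OK\r\ncontent-length: 0\r\n\r\n");
}
function handleWithoutClosure(conn) {
  return respond(conn);
}

// Stand-in for a socket write so the sketch is self-contained:
// records the payload and returns the number of characters written.
function write(conn, s) {
  conn.written.push(s);
  return s.length;
}
```

If the TypeScript request path allocates callbacks like Pattern A on every accept/read, hoisting them as in Pattern B would cut the young-generation churn the Scavenger is spending time on.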
That's as far as I've got; I want to get more eyes on this as I've hit a wall for now. I tried using `llnode`, but unfortunately it lags a lot in supported V8 versions (we're using 7.7.200; llnode doesn't support 7.*.*).
All files are available in this gist: https://gist.github.com/bartlomieju/a341a36e9ef4ba04bf0357641be736b2
CC @ry @kevinkassimo @piscisaureus @afinch7 @kitsonk