Conversation
cc @mcollina
Can you do a flamegraph of that benchmark? The constructor of `bl` initializes a readable stream, and that is not cheap. You might want to send a PR to `bl` to move to lazy initialization, like we do in core for some modules: https://github.com/nodejs/node/blob/master/lib/internal/streams/lazy_transform.js. Alternatively, introduce a
@mcollina download it and take the .txt extension off: flamegraph.html.txt
Just checked, all is well and Readable initialization is not an issue. I concur that using
LGTM
It didn't seem that way to me either.
I quickly hacked support for Buffer-like objects into protons locally so we can use it - like this
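A duck-typed check along these lines (hypothetical, not the actual patch) would let `BufferList` instances through where `protons` currently requires `Buffer.isBuffer`:

```javascript
'use strict'

// Hypothetical sketch of the protons tweak: accept any "Buffer-like" object
// exposing the methods the encoder needs, rather than the strict
// Buffer.isBuffer check that bl's BufferList instances fail.
function isBufferLike (obj) {
  return Buffer.isBuffer(obj) || (
    obj != null &&
    typeof obj.length === 'number' &&
    typeof obj.slice === 'function' &&
    typeof obj.readUInt8 === 'function'
  )
}

// Stand-in object with BufferList's shape (bl itself is not assumed installed)
const bufferListLike = {
  length: 2,
  slice: (start, end) => Buffer.from('hi').slice(start, end),
  readUInt8: (i) => Buffer.from('hi').readUInt8(i)
}

console.log(Buffer.isBuffer(bufferListLike)) // false - this is the failure
console.log(isBufferLike(bufferListLike))    // true  - duck-typing passes
console.log(isBufferLike('hi'))              // false - strings still rejected
```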
Swaps out `pull-block` for `bl`.

I tried `bl.shallowSlice` in the `this.queue` calls, but it returns instances of `BufferList` instead of `Buffer`. These fail the `Buffer.isBuffer` check used by the `protons` module further down the pull-stream pipeline, so until we refactor or replace `protons` we can't use it. Thankfully we have forked `protons` so we can improve it, though in WIP: new ipld format api ipld/js-ipld-dag-pb#105 @vmx has started to use `pbf` instead of `protons`, so we'll have to examine the performance impact of that change.

We can use `bl.consume`
to remove the bytes from the front of the buffer list, or just create a new buffer list and slice in the bytes we've yet to consume. From my testing, creating a new buffer list was faster, so that's what this PR does. Using `bl.consume` was a little slower than using `pull-block`.
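The two strategies can be illustrated with plain arrays of `Buffer`s. This is a sketch of the idea only; the real code goes through `bl`'s API, which isn't assumed to be installed here:

```javascript
'use strict'

// Two ways to drop n consumed bytes from the front of a chunk queue:
// 1. consume-style: mutate the existing list in place
// 2. new-list-style: build a fresh list referencing the remaining bytes
//    (no byte copying - Buffer#slice shares the underlying memory)

function consumeInPlace (chunks, n) {
  while (n > 0 && chunks.length > 0) {
    if (n >= chunks[0].length) {
      n -= chunks.shift().length
    } else {
      chunks[0] = chunks[0].slice(n)
      n = 0
    }
  }
  return chunks
}

function newListFrom (chunks, n) {
  const out = []
  for (const chunk of chunks) {
    if (n >= chunk.length) { n -= chunk.length; continue }
    out.push(n > 0 ? chunk.slice(n) : chunk)
    n = 0
  }
  return out
}

const chunks = [Buffer.from('abc'), Buffer.from('def')]
console.log(Buffer.concat(newListFrom(chunks, 4)).toString())   // 'ef'
console.log(Buffer.concat(consumeInPlace(chunks, 4)).toString()) // 'ef'
```

Both yield the same remaining bytes; the difference is purely allocation and mutation behaviour, which is what the benchmark numbers below measure.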
Results (lower is better):

- `bl.consume` was about 1% slower than `pull-block`
- `bl.shallowSlice`ing to remove consumed data was about 12% faster than the current `pull-block` based implementation

It uses the test from #11.
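For context, a minimal harness like the following shows the general shape of gathering "lower is better" timings. It is hypothetical and is not the test from #11; the comparison here (copying vs shallow slicing a `Buffer`) is just a stand-in workload:

```javascript
'use strict'

// Hypothetical micro-benchmark harness: run fn repeatedly, report wall time.
function bench (label, fn, iterations) {
  const start = process.hrtime.bigint()
  for (let i = 0; i < iterations; i++) fn()
  const ms = Number(process.hrtime.bigint() - start) / 1e6
  console.log(`${label}: ${ms.toFixed(2)} ms for ${iterations} iterations`)
  return ms
}

const buf = Buffer.alloc(65536)
// Buffer.from(buffer) copies bytes; Buffer#slice only creates a view
bench('copying slice (Buffer.from)', () => Buffer.from(buf.slice(1024)), 1000)
bench('shallow slice (shares memory)', () => buf.slice(1024), 1000)
```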