Streams Regression in v4.x #112

Closed
MylesBorins opened this issue Jun 13, 2016 · 8 comments

@MylesBorins
Contributor

MylesBorins commented Jun 13, 2016

We landed nodejs/node#6023 to fix nodejs/node#5820

This has created a regression that has had two different issues opened

nodejs/node#7159
nodejs/node#7278

There is a potential fix, but it has not yet been included in a release, and it may not cover one of the edge cases mentioned in nodejs/node#7278

I think that it might be wise to revert nodejs/node#6023 in the next release.

@addaleax
Member

It might be worth noting that nodejs/node#7278 is distinct from (although not unrelated to) nodejs/node#7159, will need its own bugfix, and also existed in all Node.js versions prior to 4.2.0.

@addaleax
Member

addaleax commented Jun 13, 2016

As promised, a bit of a writeup of the issues/PRs involved in this situation, as a general overview and also for myself to keep track of things (sorry if it’s a bit messy/unreadable/overlong, I’m tired and going to sleep now. 😄):

awaitDrain background

When using pipes in Node, there is a mechanism for handling backpressure: when one of the destination streams has a buffer that currently exceeds its highWaterMark, e.g. because it consumes data more slowly than it is being produced, calling .write() on it will return false. Once this buffer is emptied and all data has been consumed in the destination stream, the 'drain' event is emitted.
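For context, here is the usual pattern a producer follows to honour that false return value from .write(). This is a minimal, hypothetical example (the file path and data are made up), not code from any of the PRs involved:

```js
const fs = require('fs');

const dest = fs.createWriteStream('/tmp/out.txt');

function writeChunks(chunks) {
  let i = 0;
  (function writeNext() {
    while (i < chunks.length) {
      // write() returns false once dest's internal buffer
      // exceeds its highWaterMark.
      if (!dest.write(chunks[i++])) {
        // Stop producing and resume only once the buffer has drained.
        dest.once('drain', writeNext);
        return;
      }
    }
    dest.end();
  })();
}

writeChunks(['a'.repeat(1024), 'b'.repeat(1024), 'c'.repeat(1024)]);
```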

The readable stream (where pipe is implemented) acts upon such a false write response by increasing a counter, awaitDrain, that represents the number of piping destinations that currently have full buffers. Once a 'drain' event on any of the pipes is emitted, this counter is decreased again, and once it hits 0, data is written to the destination streams again.

This should imply that 0 <= awaitDrain <= pipesCount always holds (it currently does not).

It also means that occasionally underestimating awaitDrain is unproblematic and only affects performance – if piping is continued too early, the destination streams will indicate having full buffers and the readable stream is paused again.
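As a rough illustration of the bookkeeping described above, here is a heavily simplified, hypothetical model of what pipe() does with awaitDrain; the real logic lives in lib/_stream_readable.js and is more involved:

```js
// Simplified model of the awaitDrain bookkeeping in pipe().
// `state` stands in for the source stream's internal _readableState.
const state = { awaitDrain: 0, pipesCount: 2, flowing: true };

function ondata(dest, chunk, src) {
  if (dest.write(chunk) === false) {
    // One more destination with a full buffer; stop emitting 'data'.
    state.awaitDrain++;
    src.pause();
  }
}

function ondrain(src) {
  // A destination drained; once every full destination has drained,
  // resume the source. Invariant: 0 <= awaitDrain <= pipesCount.
  if (state.awaitDrain > 0) state.awaitDrain--;
  if (state.awaitDrain === 0) src.resume();
}
```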

- Issues/PRs Group 1
- Issues/PRs Group 2
- Issues/PRs Group 3

@rvagg
Member

rvagg commented Jun 14, 2016

Beautifully illustrates the problems we have with Streams in core, an endless game of whack-a-mole where most people eventually decide that it's too risky to touch anything and so stay right away from it.

tbh I can't spare enough brain space to get fully around this issue to express much of an opinion on the best way forward: wait, backport, revert — each of these has risks so someone has to make a call on which is the least risky path. I would prefer if @addaleax and @thealphanerd could come up with a shared perspective on an appropriate path forward for LTS and tell the rest of us what to think!

@mcollina
Member

A couple of notes:

  1. we should do a full post-mortem on this, and put failsafes in place to avoid this in the future
  2. the situation is really complex, and I agree we should do nothing until we agree on a course of action
  3. the bug addressed by nodejs/node#7292 ("stream: only increase awaitDrain once for each pipe destination") is "fixed" from readable-stream@2.0.4 to readable-stream@2.1.2, while 2.0.3 and 2.1.3 are broken.

The situation is actually made worse by readable-stream; we should push out a release there soon (we can even test there). We should use that release to test this; I'll look for a module that was broken by this.

@addaleax
Member

The fixes for the bugs listed here have lived in 6.2.2 for almost a week, and I’d feel comfortable seeing them included in one of the next v4 releases.

If they don’t make it into a v4 release, reverting nodejs/node#6023 temporarily should also be okay – the problem it solved is, for the most part, a performance issue, so reverting should be safe, too.

I don’t know how exactly this interacts with the security releases; the decision is yours (@rvagg’s? @thealphanerd’s?) anyway, but both options seem better than “just waiting” to me.

@MylesBorins
Contributor Author

I've landed the fixes from nodejs/node#7292 and nodejs/node#7160 in v4.x-staging as nodejs/node@47f82cd...0dae3ca

The plan right now is to release the security update tomorrow as v4.4.6 and have a patch release v4.4.7 next Tuesday.

@MylesBorins
Contributor Author

fixed in v4.4.7

@addaleax
Member

addaleax commented Jul 4, 2016

@mcollina I don’t know, what’s the process for getting the fixes here out in a readable-stream release?
