
[Node] Patch update to v10.19 for critical security fix #5022

Merged: @zephraph merged 2 commits on Feb 7, 2020

Conversation

mzikherman (Contributor)

Seems easy enough (on paper)! Let's see what Circle/Docker thinks of it (seems to be fine locally).

@zephraph (Contributor) commented Feb 7, 2020

cc @joeyAghion

If we merge this, I'm fairly certain we're going to hit the HTTP timing issue that caused us to roll back Node updates last time. The catch-22 is that we're also vulnerable.

@mzikherman (Contributor, Author)

Followed up with a recommended fix from nodejs/node#27363

I somewhat arbitrarily set the following values (a sketch of how the app-server values could be applied follows the list):

  • ELB idle connection timeout: 90 seconds
  • app server keepAliveTimeout: 95 seconds
  • app server headersTimeout: 105 seconds (should this be the above value + expected Force response time?)
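
For concreteness, here's a minimal sketch of how the two app-server values could be applied, assuming Force starts its HTTP server via a typical Express `app.listen()` call. The port and startup code here are illustrative, not taken from Force:

```js
const express = require("express")

const app = express()

// app.listen() returns the underlying Node http.Server, which is where
// the keep-alive and header timeouts live.
const server = app.listen(process.env.PORT || 3000, () => {
  console.log("listening")
})

// Keep idle keep-alive sockets open longer than the ELB's 90s idle timeout,
// so the ELB never reuses a connection the app has already closed.
server.keepAliveTimeout = 95 * 1000 // ms

// Per nodejs/node#27363, headersTimeout should exceed keepAliveTimeout;
// 105s leaves headroom on top of the keep-alive window.
server.headersTimeout = 105 * 1000 // ms
```

The 90-second ELB idle timeout itself would live in the load balancer / infrastructure configuration rather than in app code.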

We can merge and possibly try this on staging. I believe the ELB updates should be rolled out automatically on a deploy to staging (or one can trigger them via hokusai staging update), but I'm not 100% sure.

@joeyAghion (Contributor)

Why do this rather than the v12 upgrade that's in an open PR already? (I assume they both would contain the necessary security patch.)

Either way, we'll need to monitor for the connection issue in staging. (And maybe even hammer staging a little proactively.)

Might be helpful to run just the upgrade in staging first. The risk of adjusting timeouts in anticipation of the behavior is that it can introduce new problems between ELB<->nginx and nginx<->express, as well as dropped connections around deploys or scale-ups/downs.
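
To make the layering concern concrete: a common rule of thumb is that each hop should keep idle connections open at least as long as the hop in front of it, so the front hop never reuses a socket the back hop has already closed. A hypothetical sanity check, where the nginx value and the names are illustrative and not Force's actual config:

```js
// Illustrative values only; the nginx figure is hypothetical.
const timeoutsSeconds = {
  elbIdle: 90,        // ELB idle connection timeout (client-facing)
  nginxKeepalive: 92, // hypothetical nginx keepalive_timeout, if nginx sits in between
  appKeepAlive: 95,   // Node server.keepAliveTimeout
  appHeaders: 105,    // Node server.headersTimeout
}

// Walk the chain from the client-facing layer inward and flag any hop
// whose timeout doesn't exceed the layer in front of it.
const chain = ["elbIdle", "nginxKeepalive", "appKeepAlive", "appHeaders"]
chain.slice(1).forEach((behind, i) => {
  const front = chain[i]
  if (timeoutsSeconds[behind] <= timeoutsSeconds[front]) {
    console.warn(`${behind} (${timeoutsSeconds[behind]}s) should exceed ${front} (${timeoutsSeconds[front]}s)`)
  }
})
```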

@mzikherman (Contributor, Author)

@joeyAghion well, this PR might be ready to go (🍏), and gets us a security fix in addition to being a good place to test this connection issue. Nothing stopping us from merging this and still working on the v12 branch.

If you look at that in-progress v12 PR, it is ❌ and there are C-level stack traces on yarn install. There are other things in flight (orbs), and that PR is generally not 'close' to being mergeable.

Whereas this one is, I think. So why not try it out? It's Friday, Force is freshly deployed with nothing in the pipeline, etc.

@joeyAghion (Contributor)

That works for me assuming someone will monitor in staging and eventually production, or block production releases in the meantime.

@mzikherman (Contributor, Author)

Cool, let's try it. We can always revert.
