Don't send requests with Connection:keep-alive if we have no agent. #488
Conversation
Avoids unhandleable ECONNRESETs.
Hmm. I can pretty easily make a test which shows that without this PR, sockets are leaked. Just something as basic as

```js
var http = require('http');
var server = http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write("Thanks for waiting!");
  res.end();
});
server.listen(4000);
```

and

```js
var caronte = require('./lib/caronte');
var proxy = caronte.createProxyServer({target: 'http://127.0.0.1:4000'});
proxy.ee.on('caronte:outgoing:web:error', function (e) {
  console.log('handling error', e);
});
proxy.listen(4010);
```

pointing a web browser at localhost:4010, and running

Unfortunately I have not been able to make a simple reproduction of the ECONNRESET error without the full complexity of Meteor.

Also, my PR ends up (unless you specify an Agent) turning all connections into connection: close, which probably is a little too strong. I mean, without an Agent the proxy-target communication should be connection: close, but there's no reason that the client-proxy communication should be too (but my PR makes it be that way).
The problem with using agents is that they are impossibly slow. Only disabling the agent gives you any sort of performance. In my opinion, disabling client-to-proxy keep-alive is totally unacceptable, especially when paired with HTTPS. The handshake can take up to 150ms in my tests, so basically any REST service would be impossibly slow.
Do we have an understanding of what aspect of using an agent is slow? Am I confused if I think that without an agent, the proxy-to-server connection is just going to be leaked and never reused?
hey @glasser, could you do it since you uncovered this bug and know more about it? if not, i can handle it. thanks for submitting the PR anyway :) regarding @RushPL's comment, i think he's referring to the performance that
I am actually referring to a situation where many outgoing connections need
oh yes, in that case you can just increase the pool size to an absurd number :P
And hence no need for a pool. :-)
not really, if you live without pools, at some point you're gonna run out of file descriptors and the node process will crash.. with a pool you can have mostly the same perf, but without the horrible crash when you run out of FDs
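For illustration, a minimal sketch of the pooling approach being discussed, assuming the `agent` option is forwarded to Node's `http.request` the way the rest of this thread suggests (the `maxSockets` value and the ports are placeholders; the require path follows the repro above):

```js
var http = require('http');
var caronte = require('./lib/caronte'); // same require path as the repro above

// A big pool: sockets are reused and their total number is capped, so the
// process cannot exhaust file descriptors the way leaked keep-alive sockets can.
var bigAgent = new http.Agent({ maxSockets: 1024 });

var proxy = caronte.createProxyServer({
  target: 'http://127.0.0.1:4000',
  agent: bigAgent
});

proxy.listen(4010);
```

In Node 0.10 the default `globalAgent` caps at 5 sockets per host, which is presumably why an agent reads as "impossibly slow" here; raising `maxSockets` is the "absurd number" suggestion above.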
I agree that this is sort of brutal. But right now, the most obvious way to create a proxy without any options defaults to no agent. So in the commonish case where the client is sending Connection:keep-alive, bam, your proxy leaks sockets forever, which will certainly crash eventually. (And in my app will probably lead to ECONNRESETs eventually, though I can't reproduce this in a small example.) Maybe caronte should default to using an agent (perhaps a large one)?

I'm not sure exactly what bug to report to joyent/node. Basically, the issue is this: if, when using
Oops, hit send too soon.

... but then there's the valid argument of "ok, well, don't specify two conflicting options". The problem is that

So what should I suggest node does? Maybe it shouldn't set

Or maybe specifying both of those options should throw an error. That's reasonable, except then caronte still has to make a change: either the one from this PR, or to stop supporting
Another potential caronte-side fix: if you're doing
ok so i had a talk with a node core contrib about this.. keep alive connections should only be used, as you said, when there's an agent.. so i think the best way is to make sure that
can you add the "restore keep-alive" part? when that's done (with tests :D) i'm willing to merge this. thanks!
I can look into this, but due to some travel it definitely won't be this week. Already stayed up until 3am on this issue once this week :)
no prob, i'll look into it myself ASAP (aka when i'm done with the other issue)
I'm noticing this behavior too. It can DOS a backend server pretty quickly, actually; lsof shows my backend hanging up immediately on new connections once there are 216 outstanding keepalive connections from the proxy server, because of course I've hit the default limit of 256 file descriptors on my Mac. Naturally that can be raised, but eventually there's a limit in whatever context. Eventually these connections time out and the back end is usable again.

I'm curious why node's core http module doesn't have a tunable parameter for breaking connections based on how long it's been since the last request. That would at least keep well-intentioned goofs like leftover keepalive connections from DOSing the server, without doing something hamfisted like breaking an incomplete HTTP upload or other long-lived single request, which is a completely different decision.

I think one could write a watchdog in a few lines in userspace: when you see a new req.connection, add a close event handler, and also watch for additional requests with the same connection. If it hasn't been closed or had a new request come in for a few seconds, just close it... But is that necessary, or am I missing a piece of existing functionality?

Apologies for getting a little off the beam of the original issue with node-http-proxy. It's a bug for sure, bad enough behavior that a lot of servers would probably just firewall the responsible IP (:
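A minimal sketch of the userspace watchdog described above, assuming a plain Node `http` server; the five-second window and the `_idleKillTimer` / `_watchdogInstalled` property names are illustrative, not anything from node-http-proxy or Node core:

```js
var http = require('http');

var IDLE_MS = 5000; // illustrative idle window between requests

var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok');
});

// Watchdog: close keep-alive connections that go too long without a new request.
server.on('request', function (req, res) {
  var socket = req.connection;

  // Install the close handler only once per socket (hypothetical flag name).
  if (!socket._watchdogInstalled) {
    socket._watchdogInstalled = true;
    socket.once('close', function () {
      clearTimeout(socket._idleKillTimer);
    });
  }

  // A request just arrived, so the connection is live: cancel any pending kill.
  clearTimeout(socket._idleKillTimer);

  // When the response finishes, start the clock for the next request.
  res.on('finish', function () {
    socket._idleKillTimer = setTimeout(function () {
      socket.destroy();
    }, IDLE_MS);
  });
});

server.listen(4000);
```

Because the timer only starts on `finish` and is cleared as soon as another request arrives, a slow in-flight request is never cut off; only genuinely idle keep-alive sockets get destroyed.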
The problem seems entirely resolved by adding the default agent: `var proxy = httpProxy.createProxyServer({ agent: http.globalAgent });` I did see @RushPL's comment that the use of any agent at all is a performance killer. I haven't attempted any measurements of that myself, but leaking sockets, and not really leveraging keepalive as a result, can't be good either.
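For completeness, that workaround spelled out as a runnable sketch; the module name, target, and ports are assumptions that mirror the one-liner above rather than anything from the project's docs:

```js
var http = require('http');
var httpProxy = require('http-proxy');

// Routing outgoing requests through the shared global agent means sockets are
// pooled and released instead of being leaked when the client asks for keep-alive.
var proxy = httpProxy.createProxyServer({
  target: 'http://127.0.0.1:4000',
  agent: http.globalAgent
});

proxy.listen(4010);
```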
@boutell Yes, it would be great if Node's http server had a way to flip the socket timeout value between one value for "during a request" and one for "between requests", or a more usable event for "socket is now awaiting another request". We did something in Meteor recently to try to simulate this, but it's hacky: https://github.com/meteor/meteor/blob/devel/packages/webapp/webapp_server.js#L209 https://github.com/meteor/meteor/blob/devel/packages/webapp/webapp_server.js#L448
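A rough sketch of that "flip the timeout" idea on a plain Node `http` server; this is not the linked Meteor code, and the two timeout values are arbitrary placeholders:

```js
var http = require('http');

var BETWEEN_REQUESTS_MS = 5 * 1000;    // idle keep-alive sockets get dropped quickly
var DURING_REQUEST_MS = 5 * 60 * 1000; // but an in-flight request is given plenty of time

var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok');
});

server.on('connection', function (socket) {
  // A new connection starts out "between requests".
  socket.setTimeout(BETWEEN_REQUESTS_MS);
  socket.on('timeout', function () {
    socket.destroy();
  });
});

server.on('request', function (req, res) {
  // A request is now in flight: switch to the generous timeout.
  req.connection.setTimeout(DURING_REQUEST_MS);

  // When the response is done, the socket is idle again: switch back.
  res.on('finish', function () {
    req.connection.setTimeout(BETWEEN_REQUESTS_MS);
  });
});

server.listen(4000);
```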
Nice, this is what I was suggesting... I think you could build that as a
hey, |
So it is possible with 0.11 to run without an agent and do keep-alive to the target? How would I go about testing it?
I ran a test with:
and proxying with:
Seems that it works and does not leak (checked with lsof).. unless
i also tried completely without specifying an agent and it didn't leak either.. waiting to hear from @glasser
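A hypothetical version of that kind of test, not the snippets actually used above; the module name, ports, and the use of `lsof` are assumptions:

```js
var http = require('http');
var httpProxy = require('http-proxy');

// A target server that is happy to keep connections open.
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello');
}).listen(4000);

// A proxy created without any agent option at all.
var proxy = httpProxy.createProxyServer({ target: 'http://127.0.0.1:4000' });
proxy.listen(4010);

// Hammer http://localhost:4010 with keep-alive requests (a browser or a load
// tool will do) and watch the proxy's open sockets with `lsof -p <pid>` to
// see whether they pile up.
```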
It seems that this fixes #579
behavior is inconsistent, i accept this as a patch until

Fixed in 89a22bc
This seems to be breaking Upgrades (#638). It forces the Connection header to 'close', but on a Socket Upgrade we don't have an agent yet. Perhaps, instead, it should check that there is no agent AND that the connection is not set to 'upgrade'.
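A sketch of the check being proposed, written as a hypothetical helper rather than against the project's actual internals (the function, parameter, and field names here are illustrative):

```js
// Force Connection: close only when there is no agent AND this is not an
// Upgrade request, so WebSocket/Upgrade requests keep their Connection header.
function fixConnectionHeader(outgoingOptions, proxyOptions) {
  var headers = outgoingOptions.headers || (outgoingOptions.headers = {});
  var connection = String(headers.connection || '').toLowerCase();

  if (!proxyOptions.agent && connection !== 'upgrade') {
    headers.connection = 'close';
  }

  return outgoingOptions;
}
```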
@jayharris I'd buy that. It seems that in that case it shouldn't leak sockets, as it will only be making one request of that nature. I haven't run into this problem personally, since we use an agent in our proxy that is handling sockets. I'd accept that patch.
Awesome. I'll get it submitted in a few hours.
Avoids unhandleable ECONNRESETs.
I think this is arguably a Node http bug (in 0.10.18 at least), but: if you run an outgoing `http` request with a `Connection: keep-alive` header but no agent, here is what happens.

During the `ClientRequest` constructor, `self.shouldKeepAlive` is set to false because there is no agent. But then it calls (indirectly) the `storeHeader` function (in `http.js`), which sets `self.shouldKeepAlive` to true because you specified the keep-alive header.

Then once the request is over, `responseOnEnd` (in `http.js`) runs. Because `shouldKeepAlive` is true, we do NOT destroy the underlying socket. But we do remove its error listener. However, because we do NOT have an agent, nobody cares that we emit `free`, and the socket is essentially leaked. That is, we continue to have an open socket to the target server, which has no `error` listener and which will never be cleaned up.

It's bad enough that this is a resource leak. But to make matters worse: let's say that the target server dies. This socket will emit an ECONNRESET error... and since there is no `error` listener on the socket (there's a listener on the `ClientRequest`! but not on the socket), bam, time for an incredibly confusing error to be thrown from the top level, probably crashing your process or triggering `uncaughtException`.

I think the best workaround here is to ensure that if we have no agent, then we don't send connection: keep-alive. This PR is one implementation of this.
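As a hedged illustration of the Node 0.10 behavior described above, here is a bare `http.request` that combines the same ingredients (an explicit keep-alive header and no pooling agent); whether `agent: false` matches exactly what the proxy passes internally is an assumption, and the port is a placeholder:

```js
var http = require('http');

// Per the description above: shouldKeepAlive starts out false (no agent), then
// storeHeader flips it to true because of this header, so when the response
// ends the socket is neither destroyed nor handed back to any pool.
var req = http.request({
  host: '127.0.0.1',
  port: 4000,                               // placeholder target
  path: '/',
  agent: false,                             // no pooling agent
  headers: { 'Connection': 'keep-alive' }   // but an explicit keep-alive header
}, function (res) {
  res.resume(); // drain the body
  res.on('end', function () {
    // At this point the underlying socket is still open and its 'error'
    // listener has been removed; if the target dies now, the resulting
    // ECONNRESET has nowhere to go but the top level.
  });
});

req.on('error', function (err) {
  console.error('request error:', err); // only covers errors during the request itself
});

req.end();
```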
(I'll work around this a different way in Meteor for now: by providing an explicit Agent.)