Error: This socket is closed. #76
Comments
Thanks for the report! Just out of curiosity, what happens if you take New Relic out of your app? (If you're using the New Relic API, you can disable the module via its configuration.) Also, what versions of node, mongoose, and the New Relic module are you running?
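(For reference, disabling the agent through configuration might look roughly like this — assuming the usual newrelic.js config file; the exact option name can vary by agent version, so treat this as a sketch:)

```js
// newrelic.js — illustrative sketch, not taken from this thread.
exports.config = {
  app_name: ['my-heroku-app'],
  license_key: process.env.NEW_RELIC_LICENSE_KEY,
  // Turn the agent off without removing require('newrelic') from the app code.
  agent_enabled: false
};
```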
Seems to be a Heisenbug... :frowning: I've run three more restarts and it didn't happen again. Do you know if the above could be related? (I'm running node 0.10.21.)
I think that when you use newrelic, it reports to the API over sockets shared with your app's http server, so those writes fail once the server is shutting down.
Why would newrelic use my http server for reporting to the API? It's a separate outbound connection, isn't it?
They share a socket from the socket pool, I think.
I see. Maybe someone can shed some light on it.
Outbound requests use Agents, inbound requests are IncomingMessages, and neither of them uses a socket pool. I'm not ruling out our code from being in the mix here, but it's seeming somewhat unlikely right now. Let us know what you discover.
Heroku will send a SIGTERM before it restarts the dyno. I've never heard of Bunyan before. Edit: Given that it didn't happen again so far, let's assume that my code change regarding SIGTERM handling isn't the culprit.
Cool. Just double-checking. I think it's possible that occasionally you have some connections to MongoDB that are hanging around, and the agent is still trying to log activity on them while the process shuts down.
One more thing: I was trying to wrap my head around this stack trace (maybe it looks weird due to the newrelic instrumentation). There are two options: either it was an error inside the mongo driver (connection issues etc.), which was then logged using bunyan, or it was an error while logging something. Assuming it was an error inside the mongo driver (which is what you suggest), why does the trace end up inside the logging code at all? I'm kind of confused... looking at the bunyan src, it just writes straight to the stream it was given. The socket that is closed is STDOUT. At least that's what I think... I don't know. What else should it be?
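(A quick, throwaway way to check that assumption — not from the thread: locally stdout is usually a TTY, while under foreman/Heroku it tends to be a pipe or socket:)

```js
// Prints to stderr so it still works even if stdout is the broken stream.
console.error('stdout isTTY:', Boolean(process.stdout.isTTY));
console.error('stdout type:', process.stdout.constructor.name); // e.g. WriteStream (TTY) vs Socket (pipe)
```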
I'm just guessing here: Heroku pipes the output of my app to something else in order to process the logs. This means stdout isn't a TTY but a pipe/socket. Now when the dyno gets restarted, Heroku closes the pipe. And BANG, the socket is closed. Edit: I almost created an upstream issue, but then found https://github.com/trentm/node-bunyan#stream-errors, so it's up to newrelic to handle the stream error. Edit2: I created an upstream issue after all, since I just got even more confused: trentm/node-bunyan#108
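(A self-contained sketch of that theory — illustrative, not from the issue: pipe a child's stdout, then destroy the read end and watch the child's writes start failing. On 0.10-era node the write after the first EPIPE is what throws "This socket is closed.":)

```js
// pipe-close-sketch.js — the parent destroys the read end of the child's
// stdout pipe, which is roughly what happens to an app's logging pipe when
// the process on the other end goes away.
var spawn = require('child_process').spawn;

var childSrc =
  "setInterval(function () { process.stdout.write('tick\\n'); }, 100);" +
  "process.stdout.on('error', function (err) {" +
  "  process.stderr.write('stdout error: ' + err.message + '\\n');" +
  "});";

var child = spawn(process.execPath, ['-e', childSrc],
                  { stdio: ['ignore', 'pipe', 'inherit'] });

child.stdout.pipe(process.stdout);

// After a second, close our end of the pipe; the child's next write fails
// with EPIPE, surfacing on its process.stdout 'error' event (stderr is
// inherited, so the child's complaint is still visible).
setTimeout(function () {
  child.stdout.unpipe(process.stdout);
  child.stdout.destroy();
}, 1000);

setTimeout(function () { child.kill(); }, 3000);
```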
This sounds plausible, but it raises some questions (like, how is the stacktrace getting back to us?). @zeke, is this something that the Node buildpacks do? One thing to keep in mind is that sometimes stacktraces get confused on the V8 side of things, especially when there's as much monkeypatching involved as there is with New Relic. So it's possible (just on the possible side of likely) that one of bunyan or mongoose is showing up on those stacktraces without actually being involved.
It's not a Heroku node buildpack thing, but maybe @tsloughter can shed some light on the Heroku request lifecycle stuff.
That's a very good point indeed. I was looking at the stack trace with some suspicion myself.
It just happened again as I pushed a new update. The very first exception was this:
followed by a ton of "This socket is closed" errors. This aligns perfectly with what I observed in trentm/node-bunyan#108
I'm trying to digest all of these posts. A log is going to tail and logentries but not New Relic?
Yes, sort of, but that's not what this issue is about (it could be that tail and logentries are reading from the same output stream). Basically, what this issue is about: the newrelic module does not handle the error event on the stream it writes its logs to.
Heya Tristan, thanks for taking a look at this. As Alexander mentions, New Relic's node instrumentation is using Bunyan, which is causing a bunch of error messages to be printed out on shutdown because it's trying to write to a closed socket. I'm trying to figure out whether there's some kind of weird monkeypatching issue causing these messages (i.e. this is a Bunyan bug that New Relic needs to be working around, or maybe even a bug in the New Relic module code itself), or if this is due to stdout having been redirected to a named pipe or socket that's been closed by Heroku or foreman because the process is going down. According to @Prinzhorn this only happens intermittently, and this is something we haven't seen ourselves at New Relic. Can you shed any light on this?
@othiym23 Did you take a look at trentm/node-bunyan#108? You'll find everything to reproduce it. Basically you need to attach a stream whose underlying socket gets closed while bunyan is still writing to it.
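(For anyone else landing here, a minimal sketch of that reproduction — assumes bunyan is installed; on 0.10-era node the second write throws "This socket is closed.", while newer node reports the failure through an 'error' event instead:)

```js
var net = require('net');
var bunyan = require('bunyan');

// A throwaway local server just so there is a real socket to log to.
var server = net.createServer(function (conn) { conn.resume(); });

server.listen(0, function () {
  var sock = net.connect(server.address().port, function () {
    var log = bunyan.createLogger({ name: 'repro', stream: sock });

    log.info('this record is written fine');

    // Simulate the environment tearing the socket down underneath the logger.
    sock.destroy();

    // bunyan does not check the stream state before writing, so this blows up.
    log.info('this record hits a closed socket');
  });
});
```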
I disagree; the New Relic module doesn't open the underlying handle, except passively by using Bunyan's exported interface, and likewise New Relic never closes the handle – we assume that bunyan will continue to log output throughout the life of the process. It is a genuine error if Bunyan can't write to its log, and the correct thing to do at that point is to let it throw to the top-level. We make every effort to make the module safe and unobtrusive, but this is a case in which the module's environment is not in a sane state, and crashing is the best thing to do. Remember that you're only encountering this during shutdown.
As you said, it only happens during shutdown. I'm probably gonna leave everything as it is now. I don't feel 100% comfortable though. Basically this keeps the dyno from shutting down gracefully. If I didn't keep the process running after catching the SIGTERM, none of this would show up in the first place.
I agree that it's not the best solution, which is why I want to understand what's happening on the Heroku end. This is a thorny edge case:
Given those constraints, is there something you think New Relic could be doing better?
From the top of my head I really don't know; I was hoping someone would come up with an elegant solution. The only thing that bugs me is that I may end up with hundreds of logentries of type "Exception" every day and I need to figure out if any of them is valid. Maybe I'll try the following: listen for the close event on stdout/stderr and ignore any error events after that:

```js
process.stdout.once('close', function() {
  process.stdout.on('error', function() {
    //Ignore the error event, since the stream is closed.
  });
});

process.stderr.once('close', function() {
  process.stderr.on('error', function() {
    //Ignore the error event, since the stream is closed.
  });
});
```

Listening for the close event doesn't seem to work, though. Does this look like a node core issue? Here's a test case:
```js
setInterval(function() {
  console.info('hi');
}, 500);

process.stdout.once('close', function() {
  console.error('close');
});

process.stdout.on('error', function(err) {
  console.error(err);
});
```

Now when closing the other end of the stdout pipe, only the error event fires. I would expect the close event to be the first one to fire.
I've opened a core issue: nodejs/node-v0.x-archive#6452. Sorry for spamming so much in here :-)
I've pushed this into production as a workaround for now (Bunyan should check if the stream is destroyed and stop writing to it):

```js
['stdout', 'stderr'].forEach(function(stream) {
  process[stream].on('error', function(err) {
    //Ignore errors which are raised on a closed stream.
    if(process[stream].destroyed) {
      return;
    }

    logger.error(err);
  });
});
```

This will result in a single "EPIPE" being written to the logs. The second "EPIPE" and all following "This socket is closed" errors are ignored.
I'll keep thinking about better ways to handle errors when writing to stdout and stderr.
You're right. The only thing worse than endless recursion is endless async recursion, because it can run forever. I wrapped it in nextTick.
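(The actual change isn't shown in the thread; a sketch of what the nextTick-wrapped version of the earlier workaround might look like, with logger standing in for whatever logger is in use:)

```js
['stdout', 'stderr'].forEach(function (name) {
  process[name].on('error', function (err) {
    // Ignore errors raised on a stream that is already gone.
    if (process[name].destroyed) {
      return;
    }

    // Defer the logging so a failing log write can't synchronously re-enter
    // this handler and recurse forever.
    process.nextTick(function () {
      logger.error(err);
    });
  });
});
```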
I'm using New Relic on Heroku. An Express app with MongoDB and redis.
Yesterday I did a small code change to let my Express app handle dyno restarts gracefully: I added a SIGTERM handler. Before that I didn't handle SIGTERM at all and node would just exit the process no matter what. Now when I restart a dyno, I get dozens, if not hundreds, of these log entries before the dyno exits.
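(The handler itself isn't shown above; a minimal sketch of that kind of graceful SIGTERM shutdown for an Express app might look like this — illustrative only, assuming server is whatever app.listen() returned:)

```js
// Illustrative sketch, not the actual code from the app in this issue.
var express = require('express');
var app = express();

app.get('/', function (req, res) { res.send('ok'); });

var server = app.listen(process.env.PORT || 3000);

process.on('SIGTERM', function () {
  // Stop accepting new connections and exit once in-flight requests finish.
  server.close(function () {
    process.exit(0);
  });

  // Safety net in case connections never drain before Heroku force-kills the dyno.
  setTimeout(function () {
    process.exit(0);
  }, 10 * 1000);
});
```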
Note: I'm not sure if this is a New Relic issue or not. I'm posting this here since the newrelic module appears in the stack trace and I guess it has to do with newrelic trying to log the MongoDB action.