Log stream memory heap #17

Open
shaikatzir opened this issue May 23, 2015 · 8 comments

Comments

@shaikatzir

Sometimes my logstash server crashes, which prevents the bunyan stream from sending logs for a few hours. It seems that every time this happens, the internal memory usage fills up quickly. I guessed that bunyan keeps the logs queued up before sending them to the server, but I couldn't find any documentation on this.
Is there any way to configure or check the size of the log queue?

@libreninja

I'm also running into this issue. Separately, I noticed that npm install complains about the npm version, since we upgraded to npm 2.x.

@chris-rock
Owner

Does the bunyan client crash as well? You could try increasing the buffer size: https://github.com/chris-rock/bunyan-logstash-tcp/blob/master/lib/logstash.js#L54
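For example, something like the following minimal sketch; I'm assuming the option is named `cbuffer_size` (see the linked line in lib/logstash.js for the actual name and default):

```js
var bunyan = require('bunyan');
var bunyantcp = require('bunyan-logstash-tcp');

var log = bunyan.createLogger({
  name: 'example',
  streams: [{
    level: 'debug',
    type: 'raw',
    stream: bunyantcp.createStream({
      host: '127.0.0.1',
      port: 9998,
      cbuffer_size: 100   // assumed option name; lets more records queue while the connection is down
    })
  }]
});
```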

@chris-rock
Owner

@libreninja I relaxed the npm version dependency in the latest master release.

@shaikatzir
Author

I am not sure whether the bunyan client crashes.
The question is why the memory keeps growing without any limit.
Is the buffer cyclic? Does it delete old messages?

@chris-rock
Owner

We use a fixed-size cyclic buffer, so memory should not increase due to the buffer itself. See https://github.com/trevnorris/cbuffer
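To illustrate the eviction behavior, a minimal sketch using cbuffer directly (the `overflow` hook is assumed from the cbuffer docs):

```js
var CBuffer = require('CBuffer');

var buf = new CBuffer(3);            // fixed capacity: 3 entries
buf.overflow = function (evicted) {  // assumed hook: called when an old entry is overwritten
  console.log('evicted:', evicted);
};

buf.push('log 1');
buf.push('log 2');
buf.push('log 3');
buf.push('log 4');                   // buffer is full, so this prints "evicted: log 1"

console.log(buf.toArray());          // [ 'log 2', 'log 3', 'log 4' ]
```

So once the buffer is full, each new record overwrites the oldest one and memory stays bounded.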

@freddi301

Hi,
I recently experienced very slow responses from Node because something was clogging the event loop or the connections.
I disabled bunyan-logstash-tcp and now it is fine.
I think the problem could be this: if the TCP stream does not connect to the logstash instance (e.g. the logstash server is down), it somehow ties up Node resources, and ordinary requests to Node take longer than 60 seconds (our proxy's per-request timeout).

@freddi301

Hi, sorry for the mistake; it was an unrelated issue (a duplicated mongo connection pool).

@konstantinkrassmann

konstantinkrassmann commented Feb 22, 2017

We had the same issue. At 16:00 Elasticsearch crashed and our clustered application wasn't available anymore.


At the very least, we should be able to catch the dropped logs and write them to a file or something.
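One possible workaround (not a library feature, just a sketch): run a plain bunyan file stream alongside the TCP stream, so every record also lands on disk and survives a logstash/Elasticsearch outage:

```js
var bunyan = require('bunyan');
var bunyantcp = require('bunyan-logstash-tcp');

var log = bunyan.createLogger({
  name: 'example',
  streams: [
    {
      level: 'debug',
      type: 'raw',
      stream: bunyantcp.createStream({
        host: '127.0.0.1',
        port: 9998
      }).on('error', console.error)  // keep TCP errors from crashing the process
    },
    {
      level: 'debug',
      path: '/var/log/example-fallback.log'  // hypothetical path: local fallback copy of all records
    }
  ]
});
```

This duplicates all logs rather than capturing only the dropped ones; writing only the evicted records would need support inside the library itself.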
