memory usage seems to be growing indefinitely with simple forward configuration #2926
Comments
I am currently investigating a similar issue. My setup:
I'm using the
To reproduce the issue I created a dummy input with
By lowering the
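The inline snippets from this comment were lost in this copy. As a rough illustration of the kind of dummy-input reproduction setup being described, a hedged sketch (hypothetical tag, rate, and receiver host, using fluentd's bundled dummy input) could look like this:

```
# Hypothetical sketch, not the reporter's exact config:
# generate synthetic events at a fixed rate and send them to a forward receiver.
<source>
  @type dummy
  tag test.dummy
  rate 100                        # events per second
  dummy {"message":"dummy log line"}
</source>

<match test.**>
  @type forward
  <server>
    host receiver.example.com     # hypothetical receiver
    port 24224
  </server>
  <buffer>
    flush_interval 1s             # short interval so behavior shows up quickly
  </buffer>
</match>
```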
I just did two more tests:
- Null output with buffer: Using the
- Affected fluentd versions: I could reproduce this issue with each fluentd version down to
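For reference, the "null output with buffer" test mentioned above can be sketched like this (a hedged example, not the original config); it keeps the buffering code path active while discarding events, which helps separate buffer behavior from the forward output itself:

```
# Sketch: discard all events while still exercising buffering.
<match test.**>
  @type null
  <buffer>
    @type memory
    flush_interval 1s
  </buffer>
</match>
```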
Thanks. We will check it.
I managed to narrow it down even further: The problem first appeared after merging #2501. Somehow the SocketCache does not seem to really cache sockets and creates a new one every few seconds.
@aeber Can you give us the actual config to reproduce it? Also, how long did you run fluentd? Am I missing something important, or does it depend on the OS?
Sender (fluentd v1.10.0 and macOS 10.15.3)
Receiver (fluentd v1.10.0 and macOS 10.15.3)
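The sender and receiver configs that followed did not survive in this copy; a minimal plain-forward pair along these lines (hypothetical host and match patterns) is presumably what was being compared against:

```
# Sender sketch (hypothetical): input section omitted for brevity.
<match **>
  @type forward
  <server>
    host 127.0.0.1
    port 24224
  </server>
</match>

# Receiver sketch (hypothetical)
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match **>
  @type stdout
</match>
```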
The problem only seems to appear when using the
The configs I'm using for reproducing this:
Sender (docker image fluentd:v1.9-1)
Receiver (docker image fluentd:v1.9-1)
For testing this I created a local CA and server certificate with
The containers are run via docker-compose:

```yaml
version: '2'
services:
  client:
    image: fluentd:v1.9-1
    command:
      - "-v"
    volumes:
      - "./client/fluent.conf:/fluentd/etc/fluent.conf"
      - "./client/certs:/fluentd/certs"
  server:
    image: fluentd:v1.9-1
    command:
      - "-v"
    volumes:
      - "./server/fluent.conf:/fluentd/etc/fluent.conf"
      - "./server/certs:/fluentd/certs"
```

After running for about 20 minutes on a MacBook I see a memory usage of around 200MB for the client container, and rising.
I have a similar issue. After updating the fluentd container from 0.12.26 to 1.9.3 (alpine flavor), memory started growing some time after the container started running and never stopped until OOM. This wasn't happening with the same configuration in the old version. I can confirm @aeber's comment. Configuration (fluentd.conf):
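The fluentd.conf contents were lost in this copy. A hedged sketch of the kind of setup being discussed, an elasticsearch output plus stdout with a v1-style file buffer (hypothetical host and paths, assuming the fluent-plugin-elasticsearch plugin), would be:

```
# Hypothetical sketch of an elasticsearch + stdout match with a file buffer.
<match app.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch.example.com      # hypothetical
    port 9200
    logstash_format true
    <buffer>
      @type file
      path /var/log/fluentd/buffer/es   # hypothetical buffer path
      flush_interval 10s
      chunk_limit_size 8m
    </buffer>
  </store>
  <store>
    @type stdout
  </store>
</match>
```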
@4ndr4s Does this happen with the elasticsearch and file buffer combo?
@repeatedly Yes, this happens with the elasticsearch and buffer combo configuration. I'm not sure if the buffer configuration can influence this behavior; the only change besides the version was the buffer block, written according to the fluentd documentation. This was the previous configuration.
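For context on what "the buffer block" change typically involves: in v0.12 the buffer was configured with flat parameters on the output, while v1 moved them into a &lt;buffer&gt; section. A hedged before/after sketch with hypothetical values:

```
# v0.12 style: flat buffer parameters directly on the output
<match app.**>
  type elasticsearch
  host elasticsearch.example.com   # hypothetical
  port 9200
  buffer_type file
  buffer_path /var/log/td-agent/buffer/es
  buffer_chunk_limit 8m
  flush_interval 10s
</match>

# v1 style: the same settings expressed as a <buffer> section
<match app.**>
  @type elasticsearch
  host elasticsearch.example.com   # hypothetical
  port 9200
  <buffer>
    @type file
    path /var/log/td-agent/buffer/es
    chunk_limit_size 8m
    flush_interval 10s
  </buffer>
</match>
```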
@repeatedly On second thought, I'm not sure if that applies to elasticsearch too. I'll enable logging and let you know, and test the elasticsearch config alone.
I can confirm that a current build from master @ cdf75b0 no longer leaks memory with the given configuration. It has been running for 30 minutes now and memory usage is stable at ~70MB. Is there already a timeline for a fluentd / td-agent release covering this fix?
@repeatedly I already removed the buffer block and went back to the previous configuration, and the behavior is the same: memory starts growing until OOM. This wasn't happening in version 0.12.26 with the same configuration and the same incoming data.
@repeatedly I removed the @type file block and kept only the elasticsearch block and stdout. The issue is still happening; memory usage continues growing until OOM.
@repeatedly I used v0.12 without a buffer and with the following configuration, using 3 output plugins.
I got OOM with all of those configurations. fluentd v0.12 configuration:
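The full v0.12 configuration did not make it into this copy; a hedged sketch of an unbuffered v0.12 match fanning out to three output plugins via copy (hypothetical destinations) would look like:

```
# v0.12 sketch: three outputs without explicit buffering
<match app.**>
  type copy
  <store>
    type elasticsearch
    host elasticsearch.example.com   # hypothetical
    port 9200
  </store>
  <store>
    type file
    path /var/log/td-agent/app       # hypothetical
  </store>
  <store>
    type stdout
  </store>
</match>
```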
Can you give us the complete config, then? By the way, can you create a new issue? This issue is not related to yours. I'll close this issue later.
I am a bit confused reading this. It looks like #2945 is confirmed to fix the issue, and yet it remains open. Is this still an issue?
I'm not sure, but I think there are 3 reports in this issue:
Although the above 3 reports are similar, their causes are probably all different.
The issue of the original report is fixed, so we should close this issue. BTW, recently I investigated a similar issue related to report 2; this is why I left this open:
Describe the bug
Memory usage is growing indefinitely with a very simple @type forward configuration. I'm shipping about 10-30G of logs daily, mostly during evening hours. At the beginning td-agent uses about 100M of RES memory, but after about 12h it's 500M, and for half of this time it is mostly idle because there are not many logs during the night. Memory is never freed. When I switch to any 'local' output like file or stdout, memory usage is very stable. I've seen the same behavior with the elasticsearch output, so I guess it can be something connected with network outputs ... or just my stupidity :) There are no errors in the log file. I've tried to set RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9 but it didn't fix the problem; memory usage is still growing. I have two workers in my configuration. This problem only occurs on the one with a high amount of logs generated during day hours (worker 0). That worker is reading about 50 files at once. It may be relevant that I have a lot of
pattern not matched
warnings - I'm in the middle of standardizing the log format for all apps.
To Reproduce
Run with a high amount of not perfectly formatted logs.
Expected behavior
Stable memory usage.
Your Environment
td-agent 1.9.2
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="https://amazonlinux.com/"
4.14.171-136.231.amzn2.x86_64
Your Configuration
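The configuration block was not preserved in this copy of the issue. Based on the description above (two workers, a tail input reading ~50 files, a plain @type forward output), a hedged reconstruction with hypothetical paths and hosts might look like:

```
# Hypothetical sketch only; the reporter's actual config was not captured.
<system>
  workers 2
</system>

<worker 0>
  <source>
    @type tail
    path /var/log/apps/*.log        # hypothetical; ~50 files matched in practice
    pos_file /var/log/td-agent/apps.pos
    tag apps.*
    <parse>
      @type json                    # mismatching lines produce "pattern not matched"
    </parse>
  </source>

  <match apps.**>
    @type forward
    <server>
      host aggregator.example.com   # hypothetical
      port 24224
    </server>
  </match>
</worker>

# worker 1 (low-traffic sources) omitted from this sketch
```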
Your Error Log