-
Hi,
From the documentation: can anyone explain?
Replies: 5 comments 1 reply
-
https://docs.fluentbit.io/manual/pipeline/inputs/tail
-
Any update on this, please?
-
I also ran into headaches when trying to set these parameters; the issue is that the documentation doesn't clearly explain how they relate to each other.
-
(As of Feb 2025) it is somewhat documented, not only on the inputs/tail page but also on the inputs/syslog and outputs/elasticsearch pages: https://docs.fluentbit.io/manual/pipeline/inputs/syslog#configuration-parameters. It needs to be increased for archive processing.
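A minimal sketch of what increasing those buffers might look like on a syslog input; the transport, listen address, port, parser, and sizes below are illustrative assumptions, not values taken from the thread:

```
[INPUT]
    Name              syslog
    # Assumed TCP listener for this sketch
    Mode              tcp
    Listen            0.0.0.0
    Port              5140
    Parser            syslog-rfc5424
    # Buffers raised above the defaults, e.g. when processing large archived batches
    Buffer_Chunk_Size 64KB
    Buffer_Max_Size   512KB
```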
-
In the meantime we are updating the docs to add clarity around the different buffers:

Tail, buffers and Chunks

When a file is opened to be monitored, the Tail plugin allocates an in-memory buffer of buffer_chunk_size bytes (defaults to 32kb). If a single record (a line, in this case) is longer than buffer_chunk_size it won't fit, so that buffer will grow up to buffer_max_size. We keep the value of buffer_chunk_size very small since it fits most use cases; however, if your file is expected to contain very long lines, this needs to be adjusted. Note that if you are monitoring a high number of files, a separate buffer exists for each one, and that might generate higher memory usage.

Inside each buffer, multiple lines/records might exist. Once these records are processed, they are composed into another buffer in msgpack format (binary serialization), and that msgpack buffer is appended to what we call a Chunk: multiple serialized records that belong to the same Tag.

While we have a soft limit of 2MB per chunk, input plugins like Tail that ingest data into the pipeline might generate a msgpack buffer greater than 2MB, and the final chunk it is appended to can then exceed the 2MB soft limit. As of Fluent Bit v4.0 we don't have the functionality to enforce the chunk size; however, we have already started some work in that area which we want to ship as part of the v4.1 release targeted for Aug 2025.

Mem_buf_limit

If Fluent Bit has not been configured to use filesystem buffering, it needs some mechanism to protect against high memory consumption when there is backpressure, e.g. destination endpoints are down, network issues, etc. Fluent Bit will retry sending these chunks while more data keeps being ingested; this is where mem_buf_limit comes into play. As mentioned, this works differently when filesystem buffering is enabled. More details here:
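To make the relationship between these parameters concrete, here is a minimal sketch of a tail input touching the three settings discussed above; the path, tag, and values are illustrative assumptions, not recommendations from the thread:

```
[INPUT]
    Name              tail
    Tag               app.*
    # Hypothetical path; adjust to your environment
    Path              /var/log/app/*.log
    # Initial per-file read buffer (32k is the default mentioned above)
    Buffer_Chunk_Size 32k
    # A buffer may grow up to this size when a single line exceeds Buffer_Chunk_Size
    Buffer_Max_Size   256k
    # Caps the memory held by this input's pending chunks under backpressure
    Mem_Buf_Limit     64MB
```

As the reply above notes, Mem_Buf_Limit behaves differently once filesystem buffering is enabled (storage.path in the [SERVICE] section plus storage.type filesystem on the input).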