Fluentbit tail missing some big-ish log line even with Buffer_Max_Size set to high value #1902
Comments
Also, to add: once this problematic log line appears in the log, we don't see any subsequent log lines in Splunk either, which seems to match the behaviour of not setting |
Using |
We have the same problem; is there any update?
Displaying unparsed log entries using kubectl apply -k base; kubectl rollout status daemonset fluent-bit; kubectl logs -f -l app.kubernetes.io/name=fluent-bit. Issues like fluent#88 and fluent/fluent-bit#1902 (comment) indicate that depending on the /var/log/containers symlinks causes quite a few issues. /var/log/pods/ is the path stated in https://github.com/kubernetes/kubernetes/blob/v1.22.0/pkg/kubelet/kuberuntime/kuberuntime_manager.go#L63, and I've verified on GKE cos-containerd, GKE ubuntu-dockerd and k3s that the path contains the actual files, not symlinks. Also, using /var/log/pods makes it trivial to exclude logs from any container named fluent-bit, which reduces the risk of endless log loops.
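For illustration, a minimal tail input reading directly from /var/log/pods and excluding the collector's own output might look like the sketch below. The tag, parser and exclude pattern are assumptions for the example, not something taken from this issue:

```
[INPUT]
    Name              tail
    Tag               kube.*
    # Read container logs from /var/log/pods (real files) instead of the
    # /var/log/containers symlinks discussed above.
    Path              /var/log/pods/*/*/*.log
    # Skip the collector's own containers to reduce the risk of log loops.
    Exclude_Path      /var/log/pods/*fluent-bit*/*/*.log
    # Parser choice depends on the container runtime; docker is assumed here.
    Parser            docker
    Skip_Long_Lines   On
```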
Maybe you need to comment this out: |
This issue is stale because it has been open 90 days with no activity. Remove the stale label or comment, or this will be closed in 5 days. Maintainers can add the |
This issue was closed because it has been stalled for 5 days with no activity. |
Bug Report
Describe the bug
One of our apps logs to stdout, and Fluent Bit is able to capture the stdout log from the (Docker) log path successfully and send it on to Splunk, where we can see it fine.
However, we have noticed that one type of log line is always missing from Splunk, even though we can see it in the pod log.
The log line in question is a big-ish single line, which made me look at tuning the following:
And the log line in question has these stats:
So it is not particularly big in size, and I assumed that Skip_Long_Lines On with Buffer_Max_Size set to 2048KB was fine.
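For reference, the kind of tail tuning being discussed here would look roughly like the sketch below; the values are illustrative, not the exact config from this report:

```
[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    # Initial per-file read buffer; it can grow up to Buffer_Max_Size.
    Buffer_Chunk_Size 32k
    # Lines longer than this are dropped when Skip_Long_Lines is On;
    # with Skip_Long_Lines Off, Fluent Bit stops monitoring the file
    # once it hits such a line.
    Buffer_Max_Size   2048k
    Skip_Long_Lines   On
```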
I can't check whether Fluent Bit is failing to capture it or failing to send it, but it would be awesome if I could find this out, as I am pretty sure Splunk can handle this log line fine.
The Fluent Bit log shows nothing, but I guess that's because it's not in debug mode, which I can enable.
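One way to get more detail, assuming access to the [SERVICE] section, is to raise the log level; a minimal sketch:

```
[SERVICE]
    Flush     1
    # Temporarily increase verbosity while reproducing the missing log line.
    Log_Level debug
```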