Treatment of single-line logs #3852
Comments
I'm wondering whether this should be a different prospector type. In your case above it seems to always be one line, but it could also be described as: send the full file as one event. This is not really a typical log use case. I also assume these kinds of files involve much less complexity: they don't have to be monitored for rotation or updates etc. They are read once and that's it?
In my case the logfile consists of only one JSON dictionary, and each file is read once and deleted after being fully processed (offset == filesize). I currently explicitly disable monitoring of those files by setting the close_eof option of the prospector, so your assumption is absolutely right.
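For context, a minimal sketch of the kind of prospector configuration described above, using Filebeat 5.x syntax; the path and filename pattern are made up for illustration.

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/app/exports/*.json   # hypothetical path: one JSON document per file
    # Close the harvester as soon as end-of-file is reached, so the file is
    # not kept open and monitored for further updates.
    close_eof: true
```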
@ruflin is there a way to force this with a timeout, just as we do for Logstash? That is, read everything and, when the timeout fires, ship whatever we already have. Would that resolve this issue? Basically, to ship single-line logs we might not need a newline, just a timeout?
@gmoskovicz Our multiline support has a timeout; I'm wondering if we could use that. But I would prefer to have a clean solution.
Multiline only works when there is a newline at the end, correct? Or do you not need multiple lines? My understanding is that the multiline processing only runs once the newline is there.
@gmoskovicz Yes, correct. Ignore my previous comment.
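For reference, this is roughly what the multiline timeout mentioned above looks like in a Filebeat 5.x prospector (pattern and paths are made up for illustration). As concluded above, it does not help here: multiline only aggregates lines that have already been terminated with a newline, so an unterminated final line is still never shipped.

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/app/*.log        # hypothetical path
    # Treat any line that does not start with '{' as a continuation of the
    # previous line, and flush a pending multiline event after 5 seconds.
    multiline.pattern: '^\{'
    multiline.negate: true
    multiline.match: after
    multiline.timeout: 5s
```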
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
A fix has been added in #33568 and applied to the Filebeat aws-s3 input only; we should expand this to the file input and the rest as well.
Lines that are not terminated with a line break are not recognized by the Filebeat agent (the offset stays at 0). This causes trouble, especially when an application writes single-line logfiles that hold only one entry.
Filebeat version: filebeat-5.1.1-windows-x86_64
Discussion regarding this issue
It would be nice to have a configuration parameter that enables processing of logfiles with a single entry.