Elastic Log Driver: Support "docker logs" via local buffer #19371
Comments
Pinging @elastic/integrations (Team:Integrations)
Why not follow the JSON logging driver logic one-to-one here? We have a goroutine per container. Each would write to its own directory (named after the container) into a JSON file that is rotated after 10MB. After the goroutine has written the file, it would use libbeat to publish the log line as an event. If we have a way to store the log file in the container directory, then docker will take care of deleting all resources for us. If we decide that we need to keep the files in a separate directory, we need to make some limits configurable (e.g. delete log files once a container has been stopped for some amount of time).
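A minimal sketch of that per-container goroutine, assuming the 10MB rotation threshold mentioned above. The `publisher` interface stands in for libbeat's pipeline client, and all names and file paths here are illustrative rather than the plugin's actual API:

```go
package logdriver

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// Rotate the active file after 10MB, per the suggestion above.
const maxLogSize = 10 * 1024 * 1024

// publisher stands in for libbeat's pipeline client; in the real plugin
// this would be something like beat.Client.Publish.
type publisher interface {
	Publish(line string, ts time.Time)
}

// containerLogger runs as one goroutine per container. It appends each log
// line to a JSON file inside the container's own directory, rotating at
// maxLogSize, and only then hands the line to the publisher.
type containerLogger struct {
	dir  string // directory named after the container
	file *os.File
	pub  publisher
}

func newContainerLogger(baseDir, containerName string, pub publisher) (*containerLogger, error) {
	dir := filepath.Join(baseDir, containerName)
	if err := os.MkdirAll(dir, 0o750); err != nil {
		return nil, err
	}
	f, err := os.OpenFile(filepath.Join(dir, "container.json.log"),
		os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o640)
	if err != nil {
		return nil, err
	}
	return &containerLogger{dir: dir, file: f, pub: pub}, nil
}

// run consumes log lines until the channel is closed.
func (c *containerLogger) run(lines <-chan string) {
	defer c.file.Close()
	for line := range lines {
		ts := time.Now().UTC()
		if err := c.writeJSON(line, ts); err != nil {
			fmt.Fprintln(os.Stderr, "log write failed:", err)
			continue
		}
		// Publish only after the line is safely on disk, so it can be
		// served locally even if the output is unavailable.
		c.pub.Publish(line, ts)
	}
}

func (c *containerLogger) writeJSON(line string, ts time.Time) error {
	if err := c.rotateIfNeeded(); err != nil {
		return err
	}
	rec, err := json.Marshal(map[string]string{
		"log":  line,
		"time": ts.Format(time.RFC3339Nano),
	})
	if err != nil {
		return err
	}
	_, err = c.file.Write(append(rec, '\n'))
	return err
}

// rotateIfNeeded renames the active file aside once it exceeds maxLogSize.
func (c *containerLogger) rotateIfNeeded() error {
	info, err := c.file.Stat()
	if err != nil || info.Size() < maxLogSize {
		return err
	}
	c.file.Close()
	active := filepath.Join(c.dir, "container.json.log")
	if err := os.Rename(active, active+".1"); err != nil {
		return err
	}
	c.file, err = os.OpenFile(active, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o640)
	return err
}
```

Writing each line to disk before publishing it is what would later allow `docker logs` to be served entirely locally, even when the Elasticsearch output is down.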
Can we close this issue, or is there anything missing? Thanks!
This has been a long-standing issue in #13990 which we'll want to address sooner rather than later.

Right now, the docker plugin has no support for the `docker logs` command, which is a must for cloud adoption. We need to entirely re-implement this behavior in the plugin, and it needs to be entirely local, as one of the primary use cases for `docker logs` is grabbing logs when the upstream Elasticsearch outputs are down. I've talked with @urso about this, and our best bet to implement this in a short period of time would be to generate a log file for each configured pipeline that we can spit back when the user asks for it. There's also the fs-backed buffer that @faec is working on, although it might not be ready yet. This leaves us with two remaining questions. Keep in mind we need to support the following options:
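The options in question are presumably `docker logs` flags such as `--since`, `--tail`, and `--follow` (an assumption, not confirmed above). As a rough sketch of the read-back side under that assumption: Docker logging plugins expose a ReadLogs endpoint, and a handler like the one below could stream the locally buffered JSON file back to the daemon. The request shape and field names here are illustrative, and a real driver would use Docker's stream encoding instead of plain text:

```go
package logdriver

import (
	"bufio"
	"encoding/json"
	"net/http"
	"os"
	"path/filepath"
	"time"
)

// readConfig mirrors the options `docker logs` would pass down. The field
// names are illustrative, not Docker's actual request schema.
type readConfig struct {
	Since  time.Time
	Tail   int
	Follow bool
}

// logRecord matches the JSON lines written by containerLogger above.
type logRecord struct {
	Log  string    `json:"log"`
	Time time.Time `json:"time"`
}

// handleReadLogs streams the locally buffered log file back to the daemon,
// so `docker logs` keeps working while Elasticsearch is unreachable. A real
// plugin would register this under /LogDriver.ReadLogs.
func handleReadLogs(baseDir string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req struct {
			Container string
			Config    readConfig
		}
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		f, err := os.Open(filepath.Join(baseDir, req.Container, "container.json.log"))
		if err != nil {
			http.Error(w, err.Error(), http.StatusNotFound)
			return
		}
		defer f.Close()

		flusher, _ := w.(http.Flusher)
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			var rec logRecord
			if err := json.Unmarshal(sc.Bytes(), &rec); err != nil {
				continue // skip lines torn by rotation or a crash
			}
			if rec.Time.Before(req.Config.Since) {
				continue // honour --since
			}
			w.Write([]byte(rec.Log + "\n"))
			if flusher != nil {
				flusher.Flush() // push lines out as they are read
			}
		}
		// --follow (tail the live file) and --tail (seek back N lines)
		// are omitted here; they are part of the open questions above.
	}
}
```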