
Adapt nginx_ingress_controller integration to ingest logs from K8s #4841

Closed
gsantoro opened this issue Dec 15, 2022 · 4 comments · Fixed by #4855
@gsantoro
Contributor

The current nginx_ingress_controller integration cannot correctly ingest logs via Filebeat with the default Helm-based installation of nginx-ingress-controller.

This is because the default nginx.conf points to /var/log/nginx/ingress.log and /var/log/nginx/error.log for access logs and error logs respectively. By default those files are configured as symlinks to /dev/stdout and /dev/stderr in the nginx-ingress-controller pod, so that Kubernetes can display them as the output of the command kubectl logs <pod_name>. Kubernetes writes the pod logs under /var/log/containers or /var/lib/docker/containers depending on the container runtime (containerd or docker respectively). Files under /var/log/containers are symlinks to another location (this will be relevant later on).

There are two alternatives to fix the integration:

  1. Remove those symlinks from /var/log/nginx/*.log so that nginx-ingress-controller can instead write actual files. The Helm installation will require some changes to the nginx-ingress-controller manifest so that /var/log/nginx is mounted as a volume from the node file system (see the manifest sketch after this list). This way, when nginx writes to those locations in the pod, the files are written to the node file system and picked up by the integration running via elastic-agent in a separate pod on the same node. This solution won't require any changes to the integration, but it won't make it possible for Kubernetes to display the access logs and the error logs via the command kubectl logs, and those logs won't end up in /var/log/containers.

  2. This solution instead requires some changes to the integration, so that everything works with the default Helm installation of nginx-ingress-controller while not changing the default behaviour of the Kubernetes logs. Specifically, the default location for logs will need to change to /var/log/containers, and since those files are symlinks, we will need to add the property symlinks: true to the Filebeat configs (see the input sketch after this list). Also, since we only want to process the logs from nginx-ingress-controller, we will need to add a condition that filters those logs based on pod labels or something similar. Furthermore, we need to investigate whether we can still separate access logs from error logs and split them into two separate data streams, since Kubernetes usually displays both in the same kubectl logs output.
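For solution 1, a minimal sketch of the Deployment change it would need, assuming a hostPath volume; the volume name and host location are illustrative, not what the Helm chart actually ships:

```yaml
# Hypothetical fragment of the nginx-ingress-controller Deployment for
# solution 1: mount /var/log/nginx from the node so that, once the
# symlinks are removed, the real log files land on the host file system
# where elastic-agent on the same node can read them.
spec:
  template:
    spec:
      containers:
        - name: controller
          volumeMounts:
            - name: nginx-logs          # illustrative name
              mountPath: /var/log/nginx
      volumes:
        - name: nginx-logs
          hostPath:
            path: /var/log/nginx        # illustrative host location
            type: DirectoryOrCreate
```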
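For solution 2, a minimal sketch of what the Filebeat config could look like with the existing log input; the path glob is an illustrative stand-in for the pod-label condition described above:

```yaml
# Sketch only: this filters by path glob, whereas the integration would
# more likely filter on pod labels via a condition at the agent level.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/containers/*nginx-ingress-controller*.log
    # Files under /var/log/containers are symlinks to the real log files,
    # so symlink resolution must be enabled explicitly.
    symlinks: true
```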

@gsantoro added the Team:Cloudnative-Monitoring label Dec 15, 2022
@gsantoro self-assigned this Dec 15, 2022
@gsantoro
Contributor Author

On a side note, nginx-ingress-controller still uses the old logfile input type.

There are already 2 issues related to that work.

I hope that I won't need to change the input type to make this work.

@ChrsMark
Member

+1 for solution number 2. It looks like a quick win that will fix things going forward. If we can switch to the container input too, that would be great, and it would also solve the stdout/stderr issue, since the filestream container parser has an option to define the target stream (see the sketch below). However, that might require more testing, so a first step would be to add the symlinks setting along with the condition and the proper dynamic path. This would be an improvement, would solve any currently open cases, and would allow us to iterate faster.
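A hedged sketch of what that stream separation could look like, assuming two filestream inputs with the container parser; the input IDs and path glob are illustrative:

```yaml
# Two filestream inputs over the same symlinked pod logs, split by
# container stream: nginx access logs go to /dev/stdout and error logs
# to /dev/stderr, so each input can feed its own data stream.
filebeat.inputs:
  - type: filestream
    id: nginx-ingress-access            # illustrative id
    paths:
      - /var/log/containers/*nginx-ingress-controller*.log
    prospector.scanner.symlinks: true   # follow the /var/log/containers symlinks
    parsers:
      - container:
          stream: stdout                # access logs
          format: auto
  - type: filestream
    id: nginx-ingress-error             # illustrative id
    paths:
      - /var/log/containers/*nginx-ingress-controller*.log
    prospector.scanner.symlinks: true
    parsers:
      - container:
          stream: stderr                # error logs
          format: auto
```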

@legoguy1000
Contributor

I think this issue is present in a bunch of apps/integrations such as nginx proper, Apache, HAProxy, even Elasticsearch and Kibana: any app that normally writes to log files but redirects to stdout when in a container.

@gsantoro
Contributor Author

Version 1.6.0 of the package has been created with the fix.

Missing integration tests have been moved to a new issue: #4874.
