Replies: 1 comment
Thanks bro, your article will help me I suppose.
Hi. I am still on my Java and Quarkus learning curve and wanted to share something that I think could be included in this guide:
https://quarkus.io/guides/centralized-log-management
I just got my Quarkus container logs to appear in Grafana using a very light-weight push agent - a docker plugin!
For now I am using the free Grafana Cloud option but of course down the track running Grafana in a container on my VM would be the go.
Basically the strategy is to use the https://grafana.com/docs/loki/latest/send-data/docker-driver/ Docker plugin to send the Docker logs to Loki, which is the datasource I configured in Grafana. You can define your logs however you want (e.g. JSON or plain text) in your `application.properties` as per the Quarkus tutorial; the actual format doesn't seem to matter to Loki.
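For example, something like this in `application.properties` (a sketch, not from the guide - the level and format string are just examples):

```properties
# Plain-text console logging - the format string here is only illustrative
quarkus.log.level=INFO
quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n

# Alternatively, add the quarkus-logging-json extension and the console
# output becomes structured JSON - Loki ingests either form as a log line.
```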
I found a video on this helpful, even though it uses `promtail` (which I ended up not using because I couldn't make it work properly - most likely user error!). Also, the Docker plugin is much more lightweight.

A sample `docker run` for the Getting Started Quarkus app to configure Loki to be the log destination in Docker is shown below.
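Something along these lines - a sketch rather than my exact command; the plugin version, image name and Loki URL are placeholders you would replace with your own:

```bash
# Install the Loki log driver plugin first (pin the version from the docs, not :latest)
docker plugin install grafana/loki-docker-driver:<version-from-the-docs> \
  --alias loki --grant-all-permissions

# Run the Getting Started app with Loki as its log destination
docker run -d --name getting-started -p 8080:8080 \
  --log-driver=loki \
  --log-opt loki-url="https://<user>:<api-key>@<your-loki-host>/loki/api/v1/push" \
  --log-opt loki-external-labels=job=docker \
  quarkus/getting-started-jvm
```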
When that container runs, anything your /hello endpoint logs (and all the other Docker log output) ends up in Grafana.
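For example, the Getting Started resource might log something like this (a sketch: the log message is my own, and the jakarta imports assume Quarkus 3):

```java
package org.acme;

import io.quarkus.logging.Log;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        // Goes to stdout, so the Loki Docker driver ships it off to Grafana
        Log.info("hello endpoint was called");
        return "Hello from Quarkus REST";
    }
}
```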
You can query your logs as long as you use the correct "job" key in the Grafana query, i.e. `{job="docker"}` in this case, because that is the label we set via `--log-opt loki-external-labels=job=docker` above. You can specify multiple label key:value pairs, and the key doesn't need to be "job" - keys can be anything.

You can also add a JSON config file to Docker to make the Loki plugin the default logger for all containers on the host machine. That way you do not need to add anything to the individual containers. That might work for some use cases, but the downside is that every log message gets the same `label` property (or properties), so you lose the ability to easily know which container generated the log entry. Docs here: https://grafana.com/docs/loki/latest/send-data/docker-driver/
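That host-level config goes in `/etc/docker/daemon.json` and would look roughly like this (a sketch; the URL is a placeholder for your Loki push endpoint):

```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "https://<user>:<api-key>@<your-loki-host>/loki/api/v1/push",
    "loki-batch-size": "400"
  }
}
```

The Docker daemon needs a restart after the change, and the new default only applies to containers created afterwards.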
One gotcha: do NOT use `:latest` when pulling the docker-driver plugin. Use the version specified in the docs, because "latest" is actually many versions old!

Anyway, so far it works really well and I suggest it could be considered as another option for the centralised logging example in Quarkus.
Of course, at this stage I haven't used it in production so I am assuming it will work well there too! At the moment I am successfully sending logs from my dev containers for Keycloak, Postgres, Redis and my own Quarkus apps. I will be adding the logs from my Nginx container when I move this to production.
So, apart from sharing, my question is whether more experienced heads can see a problem with this strategy.
EDIT:
For a `docker-compose` setup you would add something like this to the service, e.g. for Redis:
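A sketch of the service entry (assuming the plugin is installed on the host; the image tag and the `containername` label value are just examples):

```yaml
services:
  redis:
    image: redis:7-alpine
    logging:
      driver: loki
      options:
        loki-url: "${GRAFANA_LOKI_ENDPOINT}"
        loki-external-labels: "containername=my-redis"
```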
where `GRAFANA_LOKI_ENDPOINT` is set in your .env / secrets file. In Grafana your query would then be `{containername="my-redis"}`.
If you use `docker-java` you can configure the same logging options programmatically when you create the container.

EDIT:
I have now deployed this to my production environment and can confirm it works really well.
I am getting all the docker log data from these containers: Nginx, Keycloak, Postgres, Redis and my own Quarkus apps.
I also created a secured REST endpoint in one of my Quarkus apps so I can send error logs and usage data from my Vue / Quasar front-end app. When the Quarkus app gets the log message it simply logs it to the console using e.g. `Log.error()` or `Log.info()`, and that is sufficient to get the front-end app logs into Grafana and into a dashboard (roughly the shape sketched below). All very cool.
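The endpoint is roughly this shape (a sketch: the path, role and payload record are illustrative, and it assumes a Jackson-based REST extension on Quarkus 3):

```java
package org.acme;

import io.quarkus.logging.Log;
import jakarta.annotation.security.RolesAllowed;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.MediaType;

@Path("/client-logs")
public class ClientLogResource {

    // Shape of the payload the Vue / Quasar front end posts
    public record ClientLogEntry(String level, String message) {}

    @POST
    @RolesAllowed("user")
    @Consumes(MediaType.APPLICATION_JSON)
    public void log(ClientLogEntry entry) {
        // Just re-log to the console; the Loki Docker driver does the rest
        if ("error".equalsIgnoreCase(entry.level())) {
            Log.error("frontend: " + entry.message());
        } else {
            Log.info("frontend: " + entry.message());
        }
    }
}
```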
Cheers,
Murray