
Lenz, our internal PaaS logs forwarder #15

Closed
wants to merge 19 commits

Conversation

CMGS
Contributor

@CMGS CMGS commented Aug 9, 2014

Not for merging; I just want to introduce this fork and my thoughts.

Our new internal PaaS is based on Docker, and as we know, Docker's built-in log handling is far too simple to manage. At first I tried logspout as a solution, but there were three problems I had to face.

  1. logspout connects to a single Docker host, but we have many. That means I would have to run logspout on every server, so its HTTP and WebSocket APIs would be useless and would introduce a security problem.
  2. We use logstash for log collection, but logspout doesn't support it. We could make logstash collect from syslog, but I think that adds overhead on every server. Also, Docker logs are JSON-formatted, and I want logstash to decode them natively and then send them to elasticsearch for analysis. That's why I sent PR#8 first.
  3. In logspout, one route targets one backend, so if I have multiple logstash indexers I would have to put an HA load balancer in front of them. Finding or building an HA solution that supports multiple protocols is not easy.
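To illustrate point 2: the three field names below are Docker's json-file log format, which lenz wants logstash to decode natively instead of re-wrapping in syslog. The sample payload and timestamp are made up for the sketch:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dockerLog mirrors the fields Docker's json-file log driver writes
// for every line a container emits.
type dockerLog struct {
	Log    string `json:"log"`
	Stream string `json:"stream"`
	Time   string `json:"time"`
}

// parseLine decodes one raw line from a container's log file.
func parseLine(raw string) (dockerLog, error) {
	var entry dockerLog
	err := json.Unmarshal([]byte(raw), &entry)
	return entry, err
}

func main() {
	line := `{"log":"hello world\n","stream":"stdout","time":"2014-08-09T12:00:00Z"}`
	entry, err := parseLine(line)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s [%s] %s", entry.Time, entry.Stream, entry.Log)
}
```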

For those reasons I forked logspout and modified it heavily. I gave it a new name because our team follows a convention for naming projects.

First, I removed the HTTP and WebSocket interfaces and made lenz catch the HUP signal to reload its route files. Then I wrote a systemd unit to run lenz on each server.

Second, I wrote a new streamer that sends JSON-formatted events to logstash over UDP. That wasn't enough, though, so I changed the route file syntax: I removed the target type and instead detect the protocol from the backend URL. In this version, lenz can send JSON-formatted logs to a backend over TCP or UDP, and plain logs to syslog. You can specify multiple backends with different protocols in a route file or on the command line.
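Detecting the protocol from the backend URL scheme could look roughly like this; the addresses and the `Backend` struct are hypothetical, not lenz's real types:

```go
package main

import (
	"fmt"
	"net/url"
)

// Backend describes one log destination. The URL scheme decides the
// protocol, replacing logspout's explicit target "type" field.
type Backend struct {
	Proto string // "tcp", "udp", or "syslog"
	Addr  string // host:port
}

// parseBackend detects the protocol from a backend URL such as
// udp://10.0.0.1:5000 or syslog://10.0.0.2:514.
func parseBackend(raw string) (Backend, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return Backend{}, err
	}
	switch u.Scheme {
	case "tcp", "udp", "syslog":
		return Backend{Proto: u.Scheme, Addr: u.Host}, nil
	default:
		return Backend{}, fmt.Errorf("unsupported scheme %q", u.Scheme)
	}
}

func main() {
	for _, raw := range []string{"udp://10.0.0.1:5000", "tcp://10.0.0.1:5001", "syslog://10.0.0.2:514"} {
		b, err := parseBackend(raw)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s\n", b.Proto, b.Addr)
	}
}
```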

Finally, after making routes support multiple backends, I use consistent hashing to manage the remotes. When events arrive, the router picks a backend by hashing the Docker container name and sends the events to it if that backend is available. Failover is supported as well.
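A minimal sketch of that selection scheme, assuming a bare ring with one point per backend (real implementations usually add virtual nodes for better balance), with failover by walking past dead backends:

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
)

// Ring is a minimal consistent-hash ring: each backend address is
// hashed onto a circle, and a key (the container name) maps to the
// first backend at or clockwise after its own hash.
type Ring struct {
	hashes   []uint32
	backends map[uint32]string
	alive    map[string]bool
}

func NewRing(addrs []string) *Ring {
	r := &Ring{backends: map[uint32]string{}, alive: map[string]bool{}}
	for _, a := range addrs {
		h := crc32.ChecksumIEEE([]byte(a))
		r.hashes = append(r.hashes, h)
		r.backends[h] = a
		r.alive[a] = true
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

// Get returns the backend for a container name, skipping dead backends
// so events fail over to the next one instead of being dropped.
func (r *Ring) Get(name string) (string, bool) {
	if len(r.hashes) == 0 {
		return "", false
	}
	h := crc32.ChecksumIEEE([]byte(name))
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	for n := 0; n < len(r.hashes); n++ {
		addr := r.backends[r.hashes[(i+n)%len(r.hashes)]]
		if r.alive[addr] {
			return addr, true
		}
	}
	return "", false
}

func main() {
	ring := NewRing([]string{"10.0.0.1:5000", "10.0.0.2:5000", "10.0.0.3:5000"})
	first, _ := ring.Get("web-1")
	ring.alive[first] = false // simulate the chosen backend going down
	second, _ := ring.Get("web-1")
	fmt.Println(first, "->", second)
}
```

The key property is that a container's events keep going to the same backend as long as it is up, and adding or removing one backend only remaps the keys that hashed to it.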

That's all. Thanks, progrium, for this convenient project and for inspiring me.

@progrium
Contributor

Thanks for this! This is really great but it's too much for one PR. You have some good ideas, but it would be nice to get them in one at a time so we can pick what is generally useful. If you don't want to spend the time, I understand. Glad my work was helpful!

@progrium progrium closed this Feb 13, 2015