Lenz, our internal PaaS logs forwarder #15

Closed
wants to merge 19 commits
2 changes: 2 additions & 0 deletions .gitignore
@@ -1 +1,3 @@
build
+logspout
+lenz
14 changes: 4 additions & 10 deletions Dockerfile
@@ -1,13 +1,7 @@
FROM flynn/busybox
-MAINTAINER Jeff Lindsay <progrium@gmail.com>
+MAINTAINER CMGS <ilskdw@gmail.com>

-ADD ./build/logspout /bin/logspout
+ADD ./lenz /bin/lenz

-ENV DOCKER unix:///tmp/docker.sock
-ENV ROUTESPATH /mnt/routes
-VOLUME /mnt/routes
-
-EXPOSE 8000
-
-ENTRYPOINT ["/bin/logspout"]
-CMD []
+ENTRYPOINT ["/bin/lenz"]
+CMD []
14 changes: 7 additions & 7 deletions Makefile
@@ -1,14 +1,14 @@
-build/container: build/logspout Dockerfile
-	docker build --no-cache -t logspout .
+build/container: build/lenz Dockerfile
+	docker build --no-cache -t lenz .
	touch build/container

-build/logspout: *.go
-	go build -o build/logspout
+build/lenz: *.go
+	go build -o build/lenz

release:
-	docker tag logspout progrium/logspout
-	docker push progrium/logspout
+	docker tag lenz CMGS/lenz
+	docker push CMGS/lenz

.PHONY: clean
clean:
-	rm -rf build
+	rm -rf build
139 changes: 25 additions & 114 deletions README.md
@@ -1,146 +1,57 @@
-# logspout
-
-A log router for Docker container output that runs entirely inside Docker. It attaches to all containers on a host, then routes their logs wherever you want.
-
-It's a 100% stateless log appliance (unless you persist routes). It's not meant for managing log files or looking at history. It is just a means to get your logs out to live somewhere else, where they belong.
-
-For now it only captures stdout and stderr, but soon Docker will let us hook into more ... perhaps getting everything from every container's /dev/log.
-
-## Getting logspout
-
-Logspout is a very small Docker container (14MB virtual, based on busybox), so you can just pull it from the index:
-
-    $ docker pull progrium/logspout
-
-## Using logspout
-
-#### Route all container output to remote syslog
-
-The simplest way to use logspout is to just take all logs and ship to a remote syslog. Just pass a default syslog target URI as the command. Also, we always mount the Docker Unix socket with `-v` to `/tmp/docker.sock`:
+# lenz

-    $ docker run -v=/var/run/docker.sock:/tmp/docker.sock progrium/logspout syslog://logs.papertrailapp.com:55555
+A fork of progrium's logspout, modified to send JSON-formatted data to backends (a format sketch follows this diff). It removes the HTTP API and changes the route file syntax.

-Logs will be tagged with the container name. The hostname will be the hostname of the logspout container, so you probably want to set the container hostname to the actual hostname by adding `-h $HOSTNAME`.
+I made lenz support multiple backends over mixed protocols. When an event comes in, it picks one backend and sends the event to it; I use consistent hashing here for scaling and failover (see the consistent-hash sketch after this diff).

-#### Inspect log streams using curl
+Finally, I implemented route-file reloading via the HUP signal, so you can change forwarding targets dynamically (see the reload sketch after this diff).

-Whether or not you run it with a default routing target, if you publish its port 8000, you can connect with curl to see your local aggregated logs in realtime.
-
-    $ docker run -d -p 8000:8000 \
-        -v=/var/run/docker.sock:/tmp/docker.sock \
-        progrium/logspout
-    $ curl $(docker port `docker ps -lq` 8000)/logs
-
-You should see a nicely colored stream of all your container logs. You can filter by container name, log type, and more. You can also get JSON objects, or you can upgrade to WebSocket and get JSON logs in your browser.
-
-See [Streaming Endpoints](#streaming-endpoints) for all options.
-
-#### Create custom routes via HTTP
-
-Along with streaming endpoints, logspout also exposes a `/routes` resource to create and manage routes.
-
-    $ curl $(docker port `docker ps -lq` 8000)/routes -X POST \
-        -d '{"source": {"filter": "db", "types": ["stderr"]}, "target": {"type": "syslog", "addr": "logs.papertrailapp.com:55555"}}'
-
-That example creates a new syslog route to [Papertrail](https://papertrailapp.com) of only `stderr` for containers with `db` in their name.
-
-By default, routes are ephemeral. But if you mount a volume to `/mnt/routes`, they will be persisted to disk.
-
-See [Routes Resource](#routes-resource) for all options.
-
-## HTTP API
+# logspout

-### Streaming Endpoints
+A log router for Docker container output that runs entirely inside Docker. It attaches to all containers on a host, then routes their logs to wherever you want.

-You can use these chunked transfer streaming endpoints for quick debugging with `curl` or for setting up easy TCP subscriptions to log sources. They also support WebSocket upgrades.
+It's a 100% stateless log appliance (unless you persist routes). It's not meant for managing log files or looking at history. It is just a means to get your logs out to live somewhere else, where they belong.

-    GET /logs
-    GET /logs/filter:<container-name-substring>
-    GET /logs/id:<container-id>
-    GET /logs/name:<container-name>
+For now it only captures stdout and stderr, but soon Docker will let us hook into more ... perhaps getting everything from every container's /dev/log.

-You can select specific log types from a source using a comma-delimited list in the query param `types`. Right now the only types are `stdout` and `stderr`, but when Docker properly takes over each container's syslog socket (or however they end up doing it), other types will be possible.
+#### Route all container output to remote backends

-If you include a request `Accept: application/json` header, the output will be JSON objects including the name and ID of the container and the log type. Note that when upgrading to WebSocket, it will always use JSON.
+The simplest way to use lenz is to take all logs and ship them to remote backends. Just pass default target URIs as the command:

-Since `/logs` and `/logs/filter:<string>` endpoints can return logs from multiple sources, they will by default return color-coded loglines prefixed with the name of the container. You can turn off the color escape codes with the query param `colors=off`, or, alternatively, stream the data in JSON format, which won't use colors or prefixes.
+    $ ./lenz -forwards=udp://zzzz:50433,udp://yyy:50433

+Logs will be tagged with the container name, and the appname is taken from the first word of the container name.

### Routes Resource

-Routes let you configure logspout to hand-off logs to another system. Right now the only supported target type is via UDP `syslog`, but hey that's pretty much everything.
+Routes let you configure lenz to hand off logs to another system.

#### Creating a route

-    POST /routes
-
-Takes a JSON object like this:
+Save a JSON object to a file like this:

    {
        "source": {
-            "filter": "_db"
+            "filter": "test",
            "types": ["stdout"]
        },
        "target": {
            "type": "syslog",
-            "addr": "logaggregator.service.consul"
-            "append_tag": ".db"
-        }
-    }
-
-The `source` field should be an object with `filter`, `name`, or `id` fields. You can specify specific log types with the `types` field to collect only `stdout` or `stderr`. If you don't specify `types`, it will route all types.
-
-To route all logs of all types on all containers, don't specify a `source`.
-
-The `append_tag` field of `target` is optional and specific to `syslog`. It lets you append to the tag of syslog packets for this route. By default the tag is `<container-name>`, so an `append_tag` value of `.app` would make the tag `<container-name>.app`.
-
-And yes, you can just specify an IP and port for `addr`, but you can also specify a name that resolves via DNS to one or more SRV records. That means this works great with [Consul](http://www.consul.io/) for service discovery.
-
-#### Listing routes
-
-    GET /routes
-
-Returns a JSON list of current routes:
-
-    [
-        {
-            "id": "3631c027fb1b",
-            "source": {
-                "name": "mycontainer"
-            },
-            "target": {
-                "type": "syslog",
-                "addr": "192.168.1.111:514"
-            }
-        }
-    ]
-
-#### Viewing a route
-
-    GET /routes/<id>
-
-Returns a JSON route object:
-
-    {
-        "id": "3631c027fb1b",
-        "source": {
-            "id": "a9efd0aeb470"
-            "types": ["stderr"]
-        },
-        "target": {
-            "type": "syslog",
-            "addr": "192.168.1.111:514"
"addr": [
"udp://logstash1:50433",
"udp://logstash2:50433",
],
"append_tag": ".test"
}
}

-#### Deleting a route
+The `source` field should be an object with `filter`, `name`, or `id` fields. You can specify specific log types with the `types` field to collect only `stdout` or `stderr`. If you don't specify `types`, it will route all types. If you specify `filter`, events are filtered by container name.

-    DELETE /routes/<id>
+To route all logs of all types on all containers, don't specify a `filter`.

-## Sponsor
+The `append_tag` field of `target` is optional and specific to `logstash`. It lets you append to the tag of events for this route. By default the tag is empty, so an `append_tag` value of `test` would make the tag `test`.

-This project was made possible by [DigitalOcean](http://digitalocean.com).
+And yes, you can just specify an IP and port for `addr`, but you can also specify a name that resolves via DNS to one or more SRV records (see the SRV lookup sketch after this diff).

## License

-BSD
+BSD
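
A note on the JSON-formatted events mentioned in the fork description above: the exact wire format isn't shown in this PR, so the field set below is only an assumption for illustration, not lenz's actual schema. A forwarder along these lines might encode a captured log line and write it to one UDP backend like so:

```go
package main

import (
	"encoding/json"
	"log"
	"net"
)

// LogEvent is a hypothetical JSON shape for one captured log line,
// guessed from the README (container name, log type, data, tag).
type LogEvent struct {
	Name string `json:"name"` // container name
	Type string `json:"type"` // "stdout" or "stderr"
	Data string `json:"data"` // the log line itself
	Tag  string `json:"tag"`  // value of append_tag, if any
}

func main() {
	// One backend from the route; address is illustrative.
	conn, err := net.Dial("udp", "logstash1:50433")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	b, err := json.Marshal(LogEvent{Name: "webapp_1", Type: "stdout", Data: "hello", Tag: "test"})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := conn.Write(b); err != nil {
		log.Print(err)
	}
}
```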
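
The description also says an incoming event is sent to one backend chosen by consistent hashing. Here is a minimal, self-contained sketch of that technique; the replica count, hash function, and the use of the container name as the hash key are assumptions for illustration, not lenz's actual code:

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
)

// Ring is a consistent-hash ring: each backend owns several points on a
// circle of uint32 hash values; a key maps to the next point clockwise.
type Ring struct {
	points  []uint32
	backend map[uint32]string
}

func NewRing(backends []string, replicas int) *Ring {
	r := &Ring{backend: make(map[uint32]string)}
	for _, b := range backends {
		for i := 0; i < replicas; i++ {
			p := crc32.ChecksumIEEE([]byte(fmt.Sprintf("%s#%d", b, i)))
			r.points = append(r.points, p)
			r.backend[p] = b
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// Pick returns the backend owning the first point at or after hash(key),
// wrapping around. The same key always lands on the same backend, and
// removing a backend only remaps the keys it owned (scaling/failover).
func (r *Ring) Pick(key string) string {
	h := crc32.ChecksumIEEE([]byte(key))
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0
	}
	return r.backend[r.points[i]]
}

func main() {
	ring := NewRing([]string{"udp://logstash1:50433", "udp://logstash2:50433"}, 32)
	fmt.Println(ring.Pick("webapp_1")) // same container -> same backend
}
```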
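
Reloading route files on HUP, as described above, usually reduces to a small signal handler like the following; `reloadRoutes` and the `/mnt/routes` path are hypothetical stand-ins for whatever lenz actually does:

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

// reloadRoutes is a placeholder: re-read route files and swap them in.
func reloadRoutes(path string) {
	log.Printf("reloading routes from %s", path)
}

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGHUP)
	go func() {
		for range sigs { // each HUP triggers one reload
			reloadRoutes("/mnt/routes")
		}
	}()
	select {} // stand-in for the forwarder's main loop
}
```

If lenz runs in a container, the reload could then be triggered with something like `docker kill -s HUP <container>`.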
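
Finally, on the note that `addr` may be a name resolving to one or more SRV records: Go's standard library covers this lookup directly. The service name below is only an example:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// With empty service and proto, the name is queried as-is,
	// which suits Consul-style names like this one.
	_, srvs, err := net.LookupSRV("", "", "logaggregator.service.consul")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, s := range srvs {
		fmt.Printf("%s:%d\n", s.Target, s.Port) // concrete host:port targets
	}
}
```
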
1 change: 0 additions & 1 deletion SPONSORS

This file was deleted.

2 changes: 1 addition & 1 deletion attacher.go
@@ -7,7 +7,7 @@ import (
	"strings"
	"sync"

-	"github.com/fsouza/go-dockerclient"
+	"github.com/CMGS/go-dockerclient"
)

type AttachManager struct {