Releases: yazgoo/fuse_kafka
0.1.6
This new release mainly contains bugfixes for 0.1.5.
Changelog
- in case of error in inotify_read, do not stop reading
- adding debug tracing via --debug flag (see README.md / Logging)
- adding log file support via --log argument (see README.md / Logging)
- adding event_queue: when the output is not ready yet, up to 10000 events are queued and dequeued once it becomes ready (see README.md / Queue)
- updating logstash kafka input plugin so that default zk.connect is localhost
- re-adding rd_kafka_poll call which was missing (so that writing to kafka no longer stops after a while)
- event timestamps are now more precise (millisecond instead of second resolution)
- adding --version flag (see README.md / Version), whose output can include the commit id
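The event queue described above can be sketched as a fixed-size ring buffer holding events while the output is not ready. This is a minimal illustration, not fuse_kafka's actual code: the names `event_queue`, `queue_push` and `queue_pop` are hypothetical.

```c
#include <stddef.h>

#define QUEUE_CAPACITY 10000  /* matches the 10000-event limit above */

typedef struct {
    char *events[QUEUE_CAPACITY];
    size_t head, tail, count;
} event_queue;

/* Queue an event; fails when the queue is full (output still not ready). */
static int queue_push(event_queue *q, char *event)
{
    if (q->count == QUEUE_CAPACITY)
        return -1;                    /* queue full: event is dropped */
    q->events[q->tail] = event;
    q->tail = (q->tail + 1) % QUEUE_CAPACITY;
    q->count++;
    return 0;
}

/* Dequeue the oldest event once the output is ready; NULL when empty. */
static char *queue_pop(event_queue *q)
{
    char *event;
    if (q->count == 0)
        return NULL;
    event = q->events[q->head];
    q->head = (q->head + 1) % QUEUE_CAPACITY;
    q->count--;
    return event;
}
```

Pushing while the output is down and popping once it comes back preserves event order and bounds memory use.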
0.1.5
This took time but it got out!
This is a major release, since there are now input/output plugins à la logstash.
I've also written an inotify plugin which compares surprisingly well with the fuse based one.
As usual, all functionalities are documented in README.md.
There is also a slideshow (source), and a new github organisation for the project.
At this point the project might need renaming, since it is now possible to run fuse_kafka without reading from fuse or writing to kafka.
Changelog
Core:
- added dynamic configuration support
- added sleep/backup mode
- made init reload smarter
- added zookeeper list parsing, so that no more brokers than necessary are added in librdkafka
- input plugin system
- inotify input plugin added (to compare with fuse based one)
- output plugins support
- windows support via mingw, including a new read_directory_changes plugin
- removing openssl dependency by writing our own base64 encoding implementation
- added stdout output plugin and encoders notion
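Dropping the openssl dependency means base64 encoding has to be done by hand. A standalone encoder can look roughly like the sketch below (standard RFC 4648 alphabet); fuse_kafka's actual implementation may differ in details such as memory handling.

```c
#include <stdlib.h>

/* Minimal base64 encoder sketch. Caller frees the returned string. */
static char *base64_encode(const unsigned char *in, size_t len)
{
    static const char table[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    size_t i, j, out_len = 4 * ((len + 2) / 3);
    char *out = malloc(out_len + 1);
    if (!out) return NULL;
    for (i = 0, j = 0; i < len;) {
        /* take up to 3 input bytes, emit 4 output characters */
        unsigned a = in[i++];
        unsigned b = i < len ? in[i++] : 0;
        unsigned c = i < len ? in[i++] : 0;
        unsigned triple = (a << 16) | (b << 8) | c;
        out[j++] = table[(triple >> 18) & 0x3F];
        out[j++] = table[(triple >> 12) & 0x3F];
        out[j++] = table[(triple >> 6) & 0x3F];
        out[j++] = table[triple & 0x3F];
    }
    /* pad with '=' for inputs not a multiple of 3 bytes */
    if (len % 3 >= 1) out[out_len - 1] = '=';
    if (len % 3 == 1) out[out_len - 2] = '=';
    out[out_len] = '\0';
    return out;
}
```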
Development environment:
- added logstash plugins, which can be used with tail mode (see README.md)
- added python unit tests (can be launched with tox)
- mininet tests
- shippable compatibility
- more test coverage
- add ignore rule to auditd
- added dockerfiles to generate RPMs for CentOS and zip for windows
0.1.4
Changelog:
- install.sh now supports local and remote installs: it can download rpms to an archive which you can install later
- memory allocation cleanup
- adding anti-hanging (which you can set up via the init script "cleanup" target)
- fixed zookeeper initialization segfaults
- added quickstart target to build.py to be able to test fast
first release
This is the first official release of fuse_kafka, a fuse-based logging agent.
It is written for collecting logs from servers running all kinds of software,
as a generic way to collect logs without needing to know about each logger.
Home:
https://github.com/yazgoo/fuse_kafka
Here are some functionalities:
- sends all writes to given directories to kafka
- passes through FS syscalls to underlying directory
- captures the pid, gid, uid, user, group, command line doing the write
- you can add metadata to identify where a message comes from (e.g. ip-address, ...)
- you can configure kafka destination cluster either by giving a broker list or a zookeeper list
- you can specify a bandwidth quota: fuse_kafka won't send data if a file is written more than a given size per second (useful for preventing floods caused by core files dumped or log rotations in directories watched by fuse_kafka)
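The quota check above amounts to counting bytes per file within one-second windows and skipping sends once the limit is hit. A minimal sketch, assuming hypothetical names (`file_quota`, `quota_allows`) that are not fuse_kafka's actual API:

```c
#include <stddef.h>
#include <time.h>

/* Illustrative per-file quota state: bytes forwarded in the current second. */
typedef struct {
    time_t window_start;   /* second the current window began */
    size_t bytes_written;  /* bytes forwarded in that window */
    size_t quota;          /* maximum bytes per second to forward */
} file_quota;

/* Return 1 if this write should be sent to kafka, 0 if it exceeds the quota.
 * `now` is passed in explicitly to keep the function testable. */
static int quota_allows(file_quota *q, size_t size, time_t now)
{
    if (now != q->window_start) {   /* new one-second window: reset counter */
        q->window_start = now;
        q->bytes_written = 0;
    }
    if (q->bytes_written + size > q->quota)
        return 0;                   /* over quota: skip sending this write */
    q->bytes_written += size;
    return 1;
}
```

A flood such as a dumped core file then only produces up to `quota` bytes of kafka traffic per second, while the write itself still goes through to the underlying filesystem.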
It is based on:
- FUSE (filesystem in userspace), to capture writes done under a given directory
- kafka (messaging queue), as the event transport system
- logstash: events are written to kafka in logstash format (except messages and commands which are stored in base64)
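An event written to kafka might look roughly like the sample below. The field names and values are illustrative, not the exact schema (see README.md for that); note the command and message are base64-encoded as described above ("L3Vzci9iaW4vYXBw" is "/usr/bin/app", "aGVsbG8gd29ybGQK" is "hello world\n").

```json
{
  "@timestamp": "2014-11-25T16:32:49.123Z",
  "@version": "1",
  "path": "/var/log/app/app.log",
  "pid": 1234,
  "uid": 1000,
  "gid": 1000,
  "user": "app",
  "group": "app",
  "command": "L3Vzci9iaW4vYXBw",
  "@message": "aGVsbG8gd29ybGQK"
}
```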
Quickstart:
http://playterm.org/r/fusekafka-quickstart-1416935569
Packages are provided for various distros, see installing section in README.md.
FUSE adds an overhead, so it should not be used on filesystems where high throughput is necessary.
Here are benchmarks:
http://htmlpreview.github.io/?https://raw.githubusercontent.com/yazgoo/fuse_kafka/master/benchs/benchmarks.html