Fix problem with AMQP connection not closed if exchange does not exist #114

Closed
wants to merge 26 commits

Commits (26)
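The bug this PR fixes follows a common resource-leak pattern: the AMQP connection is opened first, and if the subsequent exchange check fails, the error path returns without closing the connection. A minimal self-contained Java sketch of the fix; the `Connection`/`Channel` interfaces below are simplified stand-ins for the RabbitMQ client API, not the actual Graylog2 classes:

```java
import java.io.IOException;

public class AmqpConnectSketch {
    // Minimal stand-ins modeling the shape of the RabbitMQ client's
    // Connection and Channel (assumption: the real code uses com.rabbitmq.client).
    interface Channel {
        void exchangeDeclarePassive(String exchange) throws IOException;
    }
    interface Connection {
        Channel createChannel() throws IOException;
        void close() throws IOException;
        boolean isOpen();
    }

    /**
     * Opens a channel and verifies the exchange exists. If the passive declare
     * throws (exchange missing), the connection is closed before rethrowing,
     * so no TCP connection is leaked.
     */
    static Channel connectToExchange(Connection conn, String exchange) throws IOException {
        try {
            Channel channel = conn.createChannel();
            channel.exchangeDeclarePassive(exchange); // throws if exchange does not exist
            return channel;
        } catch (IOException e) {
            if (conn.isOpen()) {
                conn.close(); // the fix: do not leave the connection dangling
            }
            throw e;
        }
    }
}
```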
1237eaf
blacklist regex matcher now having same behavior as streams message m…
Apr 19, 2011
cbfe7bd
- added new matcher class which matches the pattern to the full messa…
dennisoelkers Aug 9, 2011
5689408
Revert "- added new matcher class which matches the pattern to the fu…
dennisoelkers Aug 9, 2011
fa4ac7b
Merge branch 'develop'
Dec 23, 2011
4389985
allow to override date of syslog messages to NOW
Dec 24, 2011
23e92d9
Several retries for ES index check on startup
Dec 31, 2011
8003087
bumped version to 0.9.6p1
Dec 31, 2011
9bace34
made getMessageCountsColl() synchronized #SERVER-102
Mar 1, 2012
ab05200
make message counter hashmaps concurrent
Apr 11, 2012
a196f4d
travis configuration file
Apr 16, 2012
788807b
let's try this: before_install script for travis to install syslog4j …
Apr 16, 2012
7a46a6d
travis-ci before_install script path has to be local
Apr 16, 2012
f7586d4
bumped version to 0.9.6p1-RC2
May 8, 2012
a22f9df
Updated syslog install script to include mvn path
mhart Jun 18, 2012
c796649
Eliminate nasty race condition when handling chunked GELF messages
mhart Jun 18, 2012
5aaf2b9
Merge pull request #84 from mhart/update-syslog-install-script
Jun 18, 2012
ed502ae
Merge pull request #83 from mhart/fix-gelf-race-condition
Jun 18, 2012
a573c5a
bumped version to 0.9.6p1
Jun 25, 2012
5bce534
Update the before_install script for travis to install the syslog jar…
realityforge Jul 5, 2012
a71f06c
Merge pull request #85 from realityforge/master
kroepke Jul 5, 2012
d50db0d
added implementation of commons-daemon Daemon interface and modified …
kbrockhoff Jan 15, 2013
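This commit adds an implementation of the commons-daemon `Daemon` interface, whose init/start/stop/destroy lifecycle is driven by jsvc. A self-contained sketch of that lifecycle; the `Daemon` interface is declared locally here as a stand-in, since the real `org.apache.commons.daemon.Daemon` takes a `DaemonContext` in `init()` and is not on the classpath:

```java
public class DaemonSketch {
    // Local stand-in mirroring org.apache.commons.daemon.Daemon
    // (assumption: only the lifecycle method names match the real interface).
    interface Daemon {
        void init(String[] args) throws Exception;
        void start() throws Exception;
        void stop() throws Exception;
        void destroy();
    }

    // Hypothetical server daemon: jsvc calls init() first (as root), then
    // start() after dropping privileges, stop() on shutdown, and destroy() last.
    static class ServerDaemon implements Daemon {
        private volatile boolean running;

        public void init(String[] args) { /* parse configuration, open resources */ }
        public void start()   { running = true; }
        public void stop()    { running = false; }
        public void destroy() { /* release resources */ }

        boolean isRunning() { return running; }
    }
}
```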
19700e8
added init.d script and rpm spec that uses commons-daemon
kbrockhoff Jan 15, 2013
4cc9787
Merge branch 'develop'
Feb 14, 2013
6c56f30
Merge branch 'develop' of https://github.com/Graylog2/graylog2-server…
kbrockhoff Feb 20, 2013
92dac4b
Merge branch 'master' of https://github.com/Graylog2/graylog2-server …
kbrockhoff Feb 20, 2013
9e3060d
Added handling for AlreadyClosedException and fixed executor cleanup
kbrockhoff Feb 21, 2013
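The final commit mentions fixed executor cleanup. The usual cleanup pattern for a `java.util.concurrent.ExecutorService` is an orderly shutdown with a bounded wait and a forced fallback; the sketch below shows that pattern as an assumption about the kind of fix involved, not the actual Graylog2 code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class ExecutorCleanup {
    /**
     * Orderly shutdown: stop accepting new tasks, wait briefly for in-flight
     * work to finish, then force-cancel anything still running.
     */
    static void shutdownGracefully(ExecutorService executor, long timeoutSeconds) {
        executor.shutdown(); // no new tasks accepted
        try {
            if (!executor.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)) {
                executor.shutdownNow(); // timed out: interrupt remaining tasks
            }
        } catch (InterruptedException e) {
            executor.shutdownNow();
            Thread.currentThread().interrupt(); // preserve interrupt status
        }
    }
}
```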
4 changes: 2 additions & 2 deletions Makefile
@@ -2,8 +2,8 @@ NAME=graylog2-server
PREFIX=/usr
DESTDIR=

SERVER_W_DEP=target/graylog2-server-0.9.6-jar-with-dependencies.jar
SERVER=target/graylog2-server-0.9.6.jar
SERVER_W_DEP=target/graylog2-server-0.9.6p1-RC2-jar-with-dependencies.jar
SERVER=target/graylog2-server-0.9.6p1-RC2.jar
SYSLOG4J=lib/syslog4j-0.9.46-bin.jar
INITD=contrib/distro/generic/graylog2-server.init.d
CONF=misc/graylog2.conf
@@ -0,0 +1,65 @@
# this must be the same as for your elasticsearch cluster
cluster.name: graylog2

# you could also leave this out, but makes it easier to identify the graylog2 client instance
node.name: "graylog2-server"

# we don't want the graylog2 client to store any data, or be master node
node.master: false
node.data: false

# you might need to bind to a certain IP address, do that here
#network.host: 172.24.0.14
# use a different port if you run multiple elasticsearch nodes on one machine
#transport.tcp.port: 9350

# we don't need to run the embedded HTTP server here
http.enabled: false

# adapt these for discovery to work in your network! multicast can be tricky
#discovery.zen.ping.multicast.address: 172.24.0.14
#discovery.zen.ping.multicast.group: 224.0.0.1


################################## Discovery ##################################

# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.

# Set to ensure a node sees N other master eligible nodes to be considered
# operational within the cluster. Set this option to a higher value (2-4)
# for large clusters (>3 nodes):
#
# discovery.zen.minimum_master_nodes: 1

# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
#
# discovery.zen.ping.timeout: 3s

# See <http://elasticsearch.org/guide/reference/modules/discovery/zen.html>
# for more information.

# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
#
# discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
# to perform discovery when new nodes (master or data) are started:
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"]

# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
#
# You have to install the cloud-aws plugin for enabling the EC2 discovery.
#
# See <http://elasticsearch.org/guide/reference/modules/discovery/ec2.html>
# for more information.
#
# See <http://elasticsearch.org/tutorials/2011/08/22/elasticsearch-on-ec2.html>
# for a step-by-step tutorial.
24 changes: 24 additions & 0 deletions contrib/distro/commons-daemon-redhat/SOURCES/graylog2-log4j.xml
@@ -0,0 +1,24 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration PUBLIC "-//APACHE//DTD LOG4J 1.2//EN" "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

<!-- Appenders -->
<appender name="console" class="org.apache.log4j.ConsoleAppender">
<param name="Target" value="System.out" />
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="%d %-5p: %c - %m%n" />
</layout>
</appender>

<!-- Application Loggers -->
<logger name="org.graylog2">
<level value="warn" />
</logger>

<!-- Root Logger -->
<root>
<priority value="warn" />
<appender-ref ref="console" />
</root>

</log4j:configuration>
167 changes: 167 additions & 0 deletions contrib/distro/commons-daemon-redhat/SOURCES/graylog2.conf
@@ -0,0 +1,167 @@
# If you are running more than one instances of graylog2-server you have to select one of these
# instances as master. The master will perform some periodical tasks that non-masters won't perform.
is_master = true

# Set plugin directory here (relative or absolute)
plugin_dir = plugin

# On which port (UDP) should we listen for Syslog messages? (Standard: 514)
syslog_listen_port = 514
syslog_listen_address = 0.0.0.0
syslog_enable_udp = true
syslog_enable_tcp = false
# Standard delimiter is LF. You can force using a NUL byte delimiter using this option.
syslog_use_nul_delimiter = false
# The raw syslog message is stored as full_message of the message if not disabled here.
syslog_store_full_message = true

# Socket receive buffer size (bytes) for UDP syslog and UDP GELF.
udp_recvbuffer_sizes = 1048576

# Embedded elasticsearch configuration file
# pay attention to the working directory of the server, maybe use an absolute path here
elasticsearch_config_file = /etc/graylog2-server/elasticsearch.yml
elasticsearch_max_docs_per_index = 20000000

elasticsearch_index_prefix = graylog2

# How many indices do you want to keep? If the number of indices exceeds this number, older indices will be dropped.
# elasticsearch_max_number_of_indices*elasticsearch_max_docs_per_index=total number of messages in your setup
elasticsearch_max_number_of_indices = 20

# How many ElasticSearch shards and replicas should be used per index? Note that this only applies to newly created indices.
elasticsearch_shards = 4
elasticsearch_replicas = 0

# Analyzer (tokenizer) to use for message and full_message field. The "standard" filter usually is a good idea.
# All supported analyzers are: standard, simple, whitespace, stop, keyword, pattern, language, snowball, custom
# ElasticSearch documentation: http://www.elasticsearch.org/guide/reference/index-modules/analysis/
# Note that this setting only takes effect on newly created indices.
elasticsearch_analyzer = standard

# How many minutes of messages do you want to keep in the recent index? This index lives in memory only and is used to build the overview and stream pages. Raise this value if you want to see more messages in the overview pages. This does not affect searches, which always target *all* indices.
recent_index_ttl_minutes = 60

# Storage type of recent index. Allowed values: niofs, simplefs, mmapfs, memory
# Standard: niofs - Set to memory for best speed but keep in mind that the whole recent index has to fit into the memory of your ElasticSearch machines. Set recent_index_ttl_minutes to a reasonable amount that will let the messages fit into memory.
recent_index_store_type = niofs

# Always try a reverse DNS lookup instead of parsing hostname from syslog message?
force_syslog_rdns = false
# Set time to NOW if parsing date/time from syslog message failed instead of rejecting it?
allow_override_syslog_date = true

# Batch size for all outputs. This is the maximum (!) number of messages an output module will get at once.
# For example, if this is set to 5000 (default), the ElasticSearch output will not index more than 5000 messages
# at once. After that index operation is performed, the next batch will be indexed. If there is only 1 message
# waiting, it will only index that single message. Raise this parameter only if your message rate is so high
# that indexing 5000 messages at once is not enough. (Only at *really* high message rates)
output_batch_size = 5000

# The number of parallel running processors.
# Raise this number if your buffers are filling up.
processbuffer_processors = 5
outputbuffer_processors = 5

# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
# Possible types:
# - yielding
# Compromise between performance and CPU usage.
# - sleeping
# Compromise between performance and CPU usage. Latency spikes can occur after quiet periods.
# - blocking
# High throughput, low latency, higher CPU usage.
# - busy_spinning
# Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.
processor_wait_strategy = sleeping

# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
# Start server with --statistics flag to see buffer utilization.
# Must be a power of 2. (512, 1024, 2048, ...)
ring_size = 1024

# MongoDB Configuration
mongodb_useauth = true
mongodb_user = grayloguser
mongodb_password = 123
mongodb_host = 127.0.0.1
#mongodb_replica_set = localhost:27017,localhost:27018,localhost:27019
mongodb_database = graylog2
mongodb_port = 27017

# Raise this according to the maximum connections your MongoDB server can handle if you encounter MongoDB connection problems.
mongodb_max_connections = 100

# Multiplier for the number of threads allowed to block waiting for a MongoDB connection. Default: 5
# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5, then 500 threads can block. More than that and an exception will be thrown.
# http://api.mongodb.org/java/current/com/mongodb/MongoOptions.html#threadsAllowedToBlockForConnectionMultiplier
mongodb_threads_allowed_to_block_multiplier = 5

# Graylog Extended Log Format (GELF)
use_gelf = true
gelf_listen_address = 0.0.0.0
gelf_listen_port = 12201

# Drools Rule File (Use to rewrite incoming log messages)
# rules_file = /etc/graylog2.d/rules/graylog2.drl

# AMQP
amqp_enabled = false
amqp_host = localhost
amqp_port = 5672
amqp_username = guest
amqp_password = guest
amqp_virtualhost = /

# HTTP input
# the server will accept PUT requests to /gelf or /gelf/raw
# /gelf can process all standard GELF messages containing the two header bytes
# /gelf/raw can only process uncompressed GELF messages without any header bytes.
# the HTTP server allows keep-alive connections and supports compression.
http_enabled = false
http_listen_address = 0.0.0.0
http_listen_port = 12202

# Email transport
transport_email_enabled = false
transport_email_hostname = mail.example.com
transport_email_port = 587
transport_email_use_auth = true
transport_email_use_tls = true
transport_email_auth_username = you@example.com
transport_email_auth_password = secret
transport_email_subject_prefix = [graylog2]
transport_email_from_email = graylog2@example.com
transport_email_from_name = Graylog2

# Jabber/XMPP transport
transport_jabber_enabled = false
transport_jabber_hostname = jabber.example.com
transport_jabber_port = 5222
transport_jabber_use_sasl_auth = true
transport_jabber_allow_selfsigned_certs = false
transport_jabber_auth_username = your_user
transport_jabber_auth_password = secret
transport_jabber_message_prefix = [graylog2]

# Filters
# Enable the filter that tries to extract additional fields from k=v values in the log message?
enable_tokenizer_filter = true

# Additional modules
# Graphite
#enable_graphite_output = false
#graphite_carbon_host = 127.0.0.1
#graphite_carbon_tcp_port = 2003
#graphite_prefix = logs

# Librato Metrics (http://support.torch.sh/help/kb/graylog2-server/using-librato-metrics-with-graylog2)
#enable_libratometrics_output = false
#enable_libratometrics_system_metrics = false
#libratometrics_api_user = you@example.com
#libratometrics_api_token = abcdefg12345
#libratometrics_prefix = gl2-
#libratometrics_interval = 60
#libratometrics_stream_filter =
#libratometrics_host_filter =
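The `ring_size` setting in the configuration above must be a power of 2. This is the LMAX-Disruptor-style ring-buffer constraint: with a power-of-two size, the slot index can be computed with a cheap bitmask instead of the modulo operator. A small illustrative sketch (names are hypothetical, not from the Graylog2 codebase):

```java
public class RingSizeCheck {
    // A valid ring size is a positive power of two: exactly one bit set,
    // so n & (n - 1) clears that bit and yields zero.
    static boolean isPowerOfTwo(int n) {
        return n > 0 && (n & (n - 1)) == 0;
    }

    // With a power-of-two size, sequence % ringSize reduces to a bitmask.
    static int slotFor(long sequence, int ringSize) {
        return (int) (sequence & (ringSize - 1));
    }
}
```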
26 changes: 26 additions & 0 deletions contrib/distro/commons-daemon-redhat/SOURCES/graylog2.drl
@@ -0,0 +1,26 @@
# Example Drools file.
#import org.graylog2.messagehandlers.gelf.GELFMessage
#
#rule "Overwrite localhost host"
# when
# m : GELFMessage( host == "localhost" && version == "1.0" )
# then
# m.setHost( "localhost.example.com" );
# System.out.println( "[Overwrite localhost rule fired] : " + m.toString() );
#end
#
#rule "Drop all messages from www1 facility"
# when
# m : GELFMessage( host == "www1" && facility == "graylog2-test" )
# then
# m.setFilterOut(true);
# System.out.println( "[Drop all messages from www1 facility rule fired] : " + m.toString() );
#end
#
#rule "Drop UDP and ICMP Traffic from firewall"
# when
# m : GELFMessage( fullMessage matches "(?i).*(ICMP|UDP) Packet(.|\n|\r)*" )
# then
# m.setFilterOut(true);
# System.out.println("[Drop all syslog ICMP and UDP traffic] : " + m.toString() );
#end