The configuration options doc currently describes the bulk_max_size option for the Elasticsearch output, but it does not describe this option for the other output types (Logstash, console, and file). We need to add these descriptions to the documentation.
Here are some comments by @urso carried over from PR #568
So we've got multiple output plugins:

- Elasticsearch
- Logstash
- console
- file
The same default bulk_max_size is used by all output plugins except Elasticsearch, which sets its default to 50.
This option sets the maximum number of events that can be combined internally into a batch and published by an output plugin:

- if the Beat tries to send single events, the events are collected into batches
- if the Beat tries to publish a batch of events larger than bulk_max_size, the batch is split.

Bigger batch sizes can improve performance by amortizing the per-event sending overhead. On the other hand, batch sizes that are too big can increase processing time to the point where the queues in Logstash/Elasticsearch cannot keep up: APIs return errors, connections get killed, or publish requests time out. This increases latency and lowers the throughput for indexing events.
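For reference, here is a minimal sketch of how the option could be shown per output in the docs, assuming the nested YAML layout of the Beats reference configs. The hosts, path, and the numeric values for the Logstash, console, and file outputs are placeholders for illustration, not documented defaults; only the Elasticsearch default of 50 comes from the discussion above.

```yaml
output:
  elasticsearch:
    hosts: ["localhost:9200"]
    # Maximum number of events bulked into a single Elasticsearch request
    # (default for this output is 50).
    bulk_max_size: 50

  logstash:
    hosts: ["localhost:5044"]
    # Maximum number of events batched into one publish request to Logstash
    # (placeholder value).
    bulk_max_size: 1024

  console:
    # Maximum number of events buffered into one batch before writing to stdout
    # (placeholder value).
    bulk_max_size: 512

  file:
    path: "/tmp/beats"
    # Maximum number of events buffered into one batch before writing to the file
    # (placeholder value).
    bulk_max_size: 512
```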