Configure the output

You configure {beatname_uc} to write to a specific output by setting options in the output section of the {beatname_lc}.yml config file. Only a single output may be defined.

The following topics describe how to configure each supported output:

Configure the Elasticsearch output

When you specify Elasticsearch for the output, the Beat sends the transactions directly to Elasticsearch by using the Elasticsearch HTTP API.

Example configuration:

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  index: "{beatname_lc}"
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  ssl.certificate: "/etc/pki/client/cert.pem"
  ssl.key: "/etc/pki/client/cert.key"

To enable SSL, add https to all URLs defined under hosts:

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "admin"
  password: "s3cr3t"

If the Elasticsearch nodes are defined by IP:PORT, add protocol: https to the YAML config file:

output.elasticsearch:
  hosts: ["localhost"]
  protocol: "https"
  username: "admin"
  password: "s3cr3t"

Compatibility

This output works with all compatible versions of Elasticsearch. See "Supported Beats Versions" in the Elastic Support Matrix.

Configuration options

You can specify the following options in the elasticsearch section of the {beatname_lc}.yml config file:

enabled

The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.

The default value is true.

hosts

The list of Elasticsearch nodes to connect to. The events are distributed to these nodes in round robin order. If one node becomes unreachable, the event is automatically sent to another node. Each Elasticsearch node can be defined as a URL or IP:PORT. For example: http://192.15.3.2, https://es.found.io:9230 or 192.24.3.2:9300. If no port is specified, 9200 is used.

Note
When a node is defined as an IP:PORT, the scheme and path are taken from the protocol and path config options:

output.elasticsearch:
  hosts: ["10.45.3.2:9220", "10.45.3.1:9230"]
  protocol: https
  path: /elasticsearch

In the previous example, the Elasticsearch nodes are available at https://10.45.3.2:9220/elasticsearch and https://10.45.3.1:9230/elasticsearch.

compression_level

The gzip compression level. Setting this value to 0 disables compression. The compression level must be in the range of 1 (best speed) to 9 (best compression).

Increasing the compression level reduces network usage but increases CPU usage.

The default value is 0.
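
For example, a minimal sketch that trades a little CPU for lower network usage (the host URL is illustrative):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  compression_level: 1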

worker

The number of workers per configured host publishing events to Elasticsearch. This is best used with load balancing mode enabled. Example: If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host).
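
For illustration, the following sketch (host addresses are illustrative) starts 6 workers in total, 3 per host:

output.elasticsearch:
  hosts: ["http://10.0.0.1:9200", "http://10.0.0.2:9200"]
  worker: 3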

username

The basic authentication username for connecting to Elasticsearch.

password

The basic authentication password for connecting to Elasticsearch.

parameters

Dictionary of HTTP parameters to pass within the URL with index operations.
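
As a sketch, the following adds the standard Elasticsearch refresh query parameter to each request (whether to set it, and the value shown, are purely illustrative):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  parameters:
    refresh: "wait_for"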

protocol

The name of the protocol Elasticsearch is reachable on. The options are: http or https. The default is http. However, if you specify a URL for hosts, the value of protocol is overridden by whatever scheme you specify in the URL.

path

An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where Elasticsearch listens behind an HTTP reverse proxy that exports the API under a custom prefix.

headers

Custom HTTP headers to add to each request created by the Elasticsearch output. Example:

output.elasticsearch.headers:
  X-My-Header: Header contents

It is generally possible to specify multiple header values for the same header name by separating them with a comma.

proxy_url

The URL of the proxy to use when connecting to the Elasticsearch servers. The value may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed. If a value is not specified through the configuration file, proxy environment variables are used. See the Go documentation for more information about the environment variables.
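
A minimal sketch, assuming a forward proxy reachable at proxy.example.com (hostname and port are illustrative):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  proxy_url: http://proxy.example.com:3128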

index

The index name to write events to. The default is "{beatname_lc}-%{+yyyy.MM.dd}" (for example, "{beatname_lc}-2015.04.26").

indices

Array of index selector rules supporting conditionals, format string-based field access, and name mappings. The first matching rule is used to set the index for the event being published. If indices is missing or no rule matches, the index field is used.

Rule settings:

index: The index format string to use. If the fields used are missing, the rule fails.

mapping: Dictionary mapping index names to new names.

default: Default string value if mapping does not find a match.

when: Condition which must succeed in order to execute the current rule.

Example elasticsearch output with indices:

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  index: "logs-%{+yyyy.MM.dd}"
  indices:
    - index: "critical-%{+yyyy.MM.dd}"
      when.contains:
        message: "CRITICAL"
    - index: "error-%{+yyyy.MM.dd}"
      when.contains:
        message: "ERR"

pipeline

A format string value that specifies the ingest node pipeline to write events to.

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  pipeline: my_pipeline_id

For more information, see [configuring-ingest-node].

pipelines

Similar to the indices array, this is an array of pipeline selector configurations supporting conditionals, format string-based field access, and name mappings. The first matching rule is used to set the pipeline for the event being published. If pipelines is missing or no rule matches, the pipeline field is used.

Example elasticsearch output with pipelines:

filebeat.prospectors:
- paths: ["/var/log/app/normal/*.log"]
  fields:
    type: "normal"
- paths: ["/var/log/app/critical/*.log"]
  fields:
    type: "critical"

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  index: "filebeat-%{+yyyy.MM.dd}"
  pipelines:
    - pipeline: critical_pipeline
      when.equals:
        fields.type: "critical"
    - pipeline: normal_pipeline
      when.equals:
        fields.type: "normal"

max_retries

The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. Some Beats, such as Filebeat, ignore the max_retries setting and retry until all events are published.

Set max_retries to a value less than 0 to retry until all events are published.

The default is 3.

bulk_max_size

The maximum number of events to bulk in a single Elasticsearch bulk API index request. The default is 50.

If the Beat sends single events, the events are collected into batches. If the Beat publishes a large batch of events (larger than the value specified by bulk_max_size), the batch is split.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Setting bulk_max_size to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.

timeout

The HTTP request timeout in seconds for the Elasticsearch request. The default is 90.
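
Taken together, a hedged tuning sketch (the values are illustrative, not recommendations):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  max_retries: 5
  bulk_max_size: 100
  timeout: 120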

ssl

Configuration options for SSL parameters like the certificate authority to use for HTTPS-based connections. If the ssl section is missing, the host CAs are used for HTTPS connections to Elasticsearch.

See [configuration-ssl] for more information.

Configure the Logstash output

The Logstash output sends events directly to Logstash by using the lumberjack protocol, which runs over TCP. Logstash allows for additional processing and routing of generated events.
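
A minimal example configuration (the host and port are illustrative; 5044 is the port conventionally used by the Logstash Beats input):

output.logstash:
  hosts: ["localhost:5044"]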

Accessing metadata fields

Every event sent to Logstash contains the following metadata fields that you can use in Logstash for indexing and filtering:

{
    ...
    "@metadata": { <1>
      "beat": "{beatname_lc}", <2>
      "version": "{stack-version}", <3>
      "type": "doc" <4>
    }
}
  1. {beatname_uc} uses the @metadata field to send metadata to Logstash. See the {logstashdoc}/event-dependent-configuration.html#metadata[Logstash documentation] for more about the @metadata field.

  2. The default is {beatname_lc}. To change this value, set the index option in the {beatname_uc} config file.

  3. The Beat's current version.

  4. The value of type is currently hardcoded to doc. It was used by previous Logstash configs to set the type of the document in Elasticsearch.

Warning
The @metadata.type field, added by the Logstash output, is deprecated, hardcoded to doc, and will be removed in {beatname_uc} 7.0.

You can access this metadata from within the Logstash config file to set values dynamically based on the contents of the metadata.

For example, the following Logstash configuration file for versions 2.x and 5.x sets Logstash to use the index and document type reported by Beats for indexing events into Elasticsearch:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" <1>
  }
}
  1. %{[@metadata][beat]} sets the first part of the index name to the value of the beat metadata field, %{[@metadata][version]} sets the second part to the Beat's version, and %{+YYYY.MM.dd} sets the third part of the name to a date based on the Logstash @timestamp field. For example: {beatname_lc}-{stack-version}-2017.03.29.

Events indexed into Elasticsearch with the Logstash configuration shown here will be similar to events directly indexed by Beats into Elasticsearch.

Compatibility

This output works with all compatible versions of Logstash. See "Supported Beats Versions" in the Elastic Support Matrix.

Configuration options

You can specify the following options in the logstash section of the {beatname_lc}.yml config file:

enabled

The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.

The default value is true.

hosts

The list of known Logstash servers to connect to. If load balancing is disabled, but multiple hosts are configured, one host is selected randomly (there is no precedence). If one host becomes unreachable, another one is selected randomly.

All entries in this list can contain a port number. If no port number is given, the value specified for port is used as the default port number.

compression_level

The gzip compression level. Setting this value to 0 disables compression. The compression level must be in the range of 1 (best speed) to 9 (best compression).

Increasing the compression level reduces network usage but increases CPU usage.

The default value is 3.

worker

The number of workers per configured host publishing events to Logstash. This is best used with load balancing mode enabled. Example: If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host).

loadbalance

If set to true and multiple Logstash hosts are configured, the output plugin load balances published events onto all Logstash hosts. If set to false, the output plugin sends all events to only one host (determined at random) and will switch to another host if the selected one becomes unresponsive. The default value is false.

ttl

Time to live for a connection to Logstash, after which the connection will be re-established. This setting is useful when the Logstash hosts represent load balancers. Because the connections to Logstash hosts are sticky, operating behind load balancers can lead to uneven load distribution across the instances. Specifying a TTL on the connection allows equal connection distribution across the instances. Specifying a TTL of 0 disables this feature.

The default value is 0.

Note
The "ttl" option is not yet supported on an async Logstash client (one with the "pipelining" option set).
output.logstash:
  hosts: ["localhost:5044", "localhost:5045"]
  loadbalance: true
  index: {beatname_lc}

pipelining

Configures the number of batches to be sent asynchronously to Logstash while waiting for an ACK from Logstash. The output only becomes blocking after this number of pipelining batches has been written. Pipelining is disabled if a value of 0 is configured. The default value is 0.
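
For example, a sketch that allows two batches to be in flight before the output blocks:

output.logstash:
  hosts: ["localhost:5044"]
  pipelining: 2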

port

deprecated[5.0.0]

The default port to use if the port number is not given in hosts. The default port number is 10200.

proxy_url

The URL of the SOCKS5 proxy to use when connecting to the Logstash servers. The value must be a URL with a scheme of socks5://. The protocol used to communicate to Logstash is not based on HTTP so a web-proxy cannot be used.

If the SOCKS5 proxy server requires client authentication, then a username and password can be embedded in the URL as shown in the example.

When using a proxy, hostnames are resolved on the proxy server instead of on the client. You can change this behavior by setting the proxy_use_local_resolver option.

output.logstash:
  hosts: ["remote-host:5044"]
  proxy_url: socks5://user:password@socks5-proxy:2233

proxy_use_local_resolver

The proxy_use_local_resolver option determines if Logstash hostnames are resolved locally when using a proxy. The default value is false which means that when a proxy is used the name resolution occurs on the proxy server.

index

The index root name to write events to. The default is the Beat name. For example "{beatname_lc}" generates "[{beatname_lc}-]YYYY.MM.DD" indexes (for example, "{beatname_lc}-2015.04.26").

ssl

Configuration options for SSL parameters like the root CA for Logstash connections. See [configuration-ssl] for more information. To use SSL, you must also configure the Beats input plugin for Logstash to use SSL/TLS.

timeout

The number of seconds to wait for responses from the Logstash server before timing out. The default is 30 (seconds).

max_retries

The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. Some Beats, such as Filebeat, ignore the max_retries setting and retry until all events are published.

Set max_retries to a value less than 0 to retry until all events are published.

The default is 3.

bulk_max_size

The maximum number of events to bulk in a single Logstash request. The default is 2048.

If the Beat sends single events, the events are collected into batches. If the Beat publishes a large batch of events (larger than the value specified by bulk_max_size), the batch is split.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Setting bulk_max_size to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.

slow_start

If enabled, only a subset of the events in a batch is transferred per transaction. The number of events to be sent increases up to bulk_max_size if no error is encountered. On error, the number of events per transaction is reduced again.

The default is false.

Configure the Kafka output

The Kafka output sends the events to Apache Kafka.

Example configuration:

output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]

  # message topic selection + partitioning
  topic: '%{[type]}'
  partition.round_robin:
    reachable_only: false

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

Note
Events bigger than max_message_bytes will be dropped. To avoid this problem, make sure {beatname_uc} does not generate events bigger than max_message_bytes.

Compatibility

This output works with Kafka 0.8, 0.9, and 0.10.

Configuration options

You can specify the following options in the kafka section of the {beatname_lc}.yml config file:

enabled

The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.

The default value is true.

hosts

The list of Kafka broker addresses from where to fetch the cluster metadata. The cluster metadata contains the actual Kafka brokers that events are published to.

version

The Kafka version that {beatname_lc} is assumed to run against. Defaults to the oldest supported stable version (currently version 0.8.2.0).

Event timestamps will be added if version 0.10.0.0 or newer is enabled.

Valid values are all Kafka releases between 0.8.2.0 and 0.11.0.0.

username

The username for connecting to Kafka. If username is configured, the password must be configured as well. Only SASL/PLAIN is supported.

password

The password for connecting to Kafka.

topic

The Kafka topic used for produced events. The setting can be a format string using any event field. To set the topic from the document type, use %{[type]}.

topics

Array of topic selector rules supporting conditionals, format string-based field access, and name mappings. The first matching rule is used to set the topic for the event being published. If topics is missing or no rule matches, the topic field is used.

Rule settings:

topic: The topic format string to use. If the fields used are missing, the rule fails.

mapping: Dictionary mapping topic names to new names.

default: Default string value if mapping does not find a match.

when: Condition which must succeed in order to execute the current rule.
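
As a sketch (the topic names and matched field are illustrative), events whose message contains CRITICAL go to a dedicated topic, and all other events fall back to the topic setting:

output.kafka:
  hosts: ["kafka1:9092"]
  topic: "logs"
  topics:
    - topic: "critical-logs"
      when.contains:
        message: "CRITICAL"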

key

Optional Kafka event key. If configured, the event key must be unique and can be extracted from the event using a format string.

partition

Kafka output broker event partitioning strategy. Must be one of random, round_robin, or hash. By default the hash partitioner is used.

random.group_events: Sets the number of events to be published to the same partition before the partitioner selects a new partition at random. The default value is 1, meaning that after each event a new partition is picked randomly.

round_robin.group_events: Sets the number of events to be published to the same partition before the partitioner selects the next partition. The default value is 1, meaning that after each event the next partition is selected.

hash.hash: List of fields used to compute the partitioning hash value. If no field is configured, the event's key value is used.

hash.random: Randomly distribute events if no hash or key value can be computed.

All partitioners will try to publish events to all partitions by default. If a partition’s leader becomes unreachable for the beat, the output might block. All partitioners support setting reachable_only to overwrite this behavior. If reachable_only is set to true, events will be published to available partitions only.

Note
Publishing to a subset of available partitions potentially increases resource usage because events may become unevenly distributed.
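
For illustration, a hedged sketch of a hash partitioner configuration (the hash field name is illustrative):

output.kafka:
  hosts: ["kafka1:9092"]
  topic: "logs"
  partition.hash:
    hash: ["beat.hostname"]
    random: true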

client_id

The configurable ClientID used for logging, debugging, and auditing purposes. The default is "beats".

worker

The number of concurrent load-balanced Kafka output workers.

codec

Output codec configuration. If the codec section is missing, events will be json encoded.

See Configure the output codec for more information.

metadata

Kafka metadata update settings. The metadata contains information about brokers, topics, partitions, and active leaders to use for publishing.

refresh_frequency

Metadata refresh interval. Defaults to 10 minutes.

retry.max

Total number of metadata update retries when the cluster is in the middle of a leader election. The default is 3.

retry.backoff

Waiting time between retries during leader elections. Default is 250ms.

max_retries

The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. Some Beats, such as Filebeat, ignore the max_retries setting and retry until all events are published.

Set max_retries to a value less than 0 to retry until all events are published.

The default is 3.

bulk_max_size

The maximum number of events to bulk in a single Kafka request. The default is 2048.

timeout

The number of seconds to wait for responses from the Kafka brokers before timing out. The default is 30 (seconds).

broker_timeout

The maximum duration a broker will wait for the number of required ACKs. The default is 10s.

channel_buffer_size

The number of messages buffered in the output pipeline per Kafka broker. The default is 256.

keep_alive

The keep-alive period for an active network connection. If 0s, keep-alives are disabled. The default is 0 seconds.

compression

Sets the output compression codec. Must be one of none, snappy, lz4, or gzip. The default is gzip.

max_message_bytes

The maximum permitted size of JSON-encoded messages. Bigger messages will be dropped. The default value is 1000000 (bytes). This value should be equal to or less than the broker’s message.max.bytes.

required_acks

The ACK reliability level required from the broker. 0=no response, 1=wait for local commit, -1=wait for all replicas to commit. The default is 1.

Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently on error.

ssl

Configuration options for SSL parameters like the root CA for Kafka connections. See [configuration-ssl] for more information.

Configure the Redis output

The Redis output inserts the events into a Redis list or a Redis channel. This output plugin is compatible with the Redis input plugin for Logstash.

Example configuration:

output.redis:
  hosts: ["localhost"]
  password: "my_password"
  key: "{beatname_lc}"
  db: 0
  timeout: 5

Compatibility

This output works with Redis 3.2.4.

Configuration options

You can specify the following options in the redis section of the {beatname_lc}.yml config file:

enabled

The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.

The default value is true.

hosts

The list of Redis servers to connect to. If load balancing is enabled, the events are distributed to the servers in the list. If one server becomes unreachable, the events are distributed to the reachable servers only. You can define each Redis server by specifying HOST or HOST:PORT. For example: "192.15.3.2" or "test.redis.io:12345". If you don’t specify a port number, the value configured by port is used.

port

deprecated[5.0.0]

The Redis port to use if hosts does not contain a port number. The default is 6379.

index

deprecated[5.0.0,The index setting is renamed to key]

The name of the Redis list or channel the events are published to. The default is "{beatname_lc}".

key

The name of the Redis list or channel the events are published to. The default is "{beatname_lc}".

The Redis key can be set dynamically using a format string that accesses any fields in the event to be published.

The following configuration uses the fields.list field to set the Redis list key. If fields.list is missing, fallback is used:

output.redis:
  hosts: ["localhost"]
  key: "%{[fields.list]:fallback}"

keys

Array of key selector configurations supporting conditionals, format string-based field access, and name mappings. The first matching rule is used to set the key for the event being published. If keys is missing or no rule matches, the key field is used.

Rule settings:

key: The key format string. If the fields used in the format string are missing, the rule fails.

mapping: Dictionary mapping key values to new names.

default: Default string value if mapping does not find a match.

when: Condition which must succeed in order to execute the current rule.

Example keys settings:

output.redis:
  hosts: ["localhost"]
  key: "default_list"
  keys:
    - key: "info_list"   # send to info_list if `message` field contains INFO
      when.contains:
        message: "INFO"
    - key: "debug_list"  # send to debug_list if `message` field contains DEBUG
      when.contains:
        message: "DEBUG"
    - key: "%{[type]}"
      mapping:
        "http": "frontend_list"
        "nginx": "frontend_list"
        "mysql": "backend_list"

password

The password to authenticate with. The default is no authentication.

db

The Redis database number where the events are published. The default is 0.

datatype

The Redis data type to use for publishing events. If the data type is list, the Redis RPUSH command is used, and all events are added to the list with the key defined under key. If the data type channel is used, the Redis PUBLISH command is used, which means that all events are pushed to the pub/sub mechanism of Redis. The name of the channel is the one defined under key. The default value is list.
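
For example, a sketch that publishes to a Redis pub/sub channel instead of a list:

output.redis:
  hosts: ["localhost"]
  key: "{beatname_lc}"
  datatype: channel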

codec

Output codec configuration. If the codec section is missing, events will be json encoded.

See Configure the output codec for more information.

host_topology

deprecated[5.0.0]

The Redis host to connect to when using topology map support. Topology map support is disabled if this option is not set.

password_topology

deprecated[5.0.0]

The password to use for authenticating with the Redis topology server. The default is no authentication.

db_topology

deprecated[5.0.0]

The Redis database number where the topology information is stored. The default is 1.

worker

The number of workers to use for each host configured to publish events to Redis. Use this setting along with the loadbalance option. For example, if you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host).

loadbalance

If set to true and multiple hosts or workers are configured, the output plugin load balances published events onto all Redis hosts. If set to false, the output plugin sends all events to only one host (determined at random) and will switch to another host if the currently selected one becomes unreachable. The default value is true.

timeout

The Redis connection timeout in seconds. The default is 5 seconds.

max_retries

The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. Some Beats, such as Filebeat, ignore the max_retries setting and retry until all events are published.

Set max_retries to a value less than 0 to retry until all events are published.

The default is 3.

bulk_max_size

The maximum number of events to bulk in a single Redis request or pipeline. The default is 2048.

If the Beat sends single events, the events are collected into batches. If the Beat publishes a large batch of events (larger than the value specified by bulk_max_size), the batch is split.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Setting bulk_max_size to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.

ssl

Configuration options for SSL parameters like the root CA for Redis connections guarded by SSL proxies (for example stunnel). See [configuration-ssl] for more information.

proxy_url

The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The value must be a URL with a scheme of socks5://. You cannot use a web proxy because the protocol used to communicate with Redis is not based on HTTP.

If the SOCKS5 proxy server requires client authentication, you can embed a username and password in the URL.

When using a proxy, hostnames are resolved on the proxy server instead of on the client. You can change this behavior by setting the proxy_use_local_resolver option.

proxy_use_local_resolver

This option determines whether Redis hostnames are resolved locally when using a proxy. The default value is false, which means that name resolution occurs on the proxy server.

Configure the File output

The File output dumps the transactions into a file where each transaction is in JSON format. Currently, this output is used for testing, but it can also be used as input for Logstash.

output.file:
  path: "/tmp/{beatname_lc}"
  filename: {beatname_lc}
  #rotate_every_kb: 10000
  #number_of_files: 7
  #permissions: 0600

Configuration options

You can specify the following options in the file section of the {beatname_lc}.yml config file:

enabled

The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.

The default value is true.

path

The path to the directory where the generated files will be saved. This option is mandatory.

filename

The name of the generated files. The default is set to the Beat name. For example, the files generated by default for {beatname_uc} would be "{beatname_lc}", "{beatname_lc}.1", "{beatname_lc}.2", and so on.

rotate_every_kb

The maximum size in kilobytes of each file. When this size is reached, the files are rotated. The default value is 10240 KB.

number_of_files

The maximum number of files to save under path. When this number of files is reached, the oldest file is deleted, and the rest of the files are shifted from last to first. The default is 7 files.

permissions

Permissions to use for file creation. The default is 0600.

codec

Output codec configuration. If the codec section is missing, events will be json encoded.

See Configure the output codec for more information.

Configure the Console output

The Console output writes events in JSON format to stdout.

output.console:
  pretty: true

Configuration options

You can specify the following options in the console section of the {beatname_lc}.yml config file:

pretty

If pretty is set to true, events written to stdout will be nicely formatted. The default is false.

codec

Output codec configuration. If the codec section is missing, events will be json encoded using the pretty option.

See Configure the output codec for more information.

enabled

The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.

The default value is true.

bulk_max_size

The maximum number of events to buffer internally during publishing. The default is 2048.

Specifying a larger batch size may add some latency and buffering during publishing. However, for Console output, this setting does not affect how events are published.

Setting bulk_max_size to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.

Configure the output codec

For outputs that do not require a specific encoding, you can change the encoding by using the codec configuration. You can specify either the json or format codec. By default the json codec is used.

json.pretty: If pretty is set to true, events will be nicely formatted. The default is false.

Example configuration that uses the json codec with pretty printing enabled to write events to the console:

output.console:
  codec.json:
    pretty: true

format.string: Configurable format string used to create a custom formatted message.

Example configuration that uses the format codec to print the event's timestamp and message field to the console:

output.console:
  codec.format:
    string: '%{[@timestamp]} %{[message]}'

Configure the output for the Elastic Cloud

{beatname_uc} comes with two settings that simplify the output configuration when used together with Elastic Cloud. When defined, these settings overwrite settings from other parts of the configuration.

Example:

cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
cloud.auth: "elastic:{pwd}"

These settings can also be specified at the command line, like this:

{beatname_lc} -e -E cloud.id="" -E cloud.auth=""

cloud.id

The Cloud ID, which can be found in the Elastic Cloud web console, is used by {beatname_uc} to resolve the Elasticsearch and Kibana URLs. This setting overwrites the output.elasticsearch.hosts and setup.kibana.host settings.

cloud.auth

When specified, the cloud.auth overwrites the output.elasticsearch.username and output.elasticsearch.password settings. Because the Kibana settings inherit the username and password from the Elasticsearch output, this can also be used to set the setup.kibana.username and setup.kibana.password options.