From 86807482fbdafe0ee276e903a926179eb07c4421 Mon Sep 17 00:00:00 2001
From: urso
Date: Thu, 19 Apr 2018 21:25:08 +0200
Subject: [PATCH 1/4] Add spool docs

---
 libbeat/docs/queueconfig.asciidoc | 153 +++++++++++++++++++++++++++++-
 1 file changed, 149 insertions(+), 4 deletions(-)

diff --git a/libbeat/docs/queueconfig.asciidoc b/libbeat/docs/queueconfig.asciidoc
index 6d1d17388af..2931d2e6fb1 100644
--- a/libbeat/docs/queueconfig.asciidoc
+++ b/libbeat/docs/queueconfig.asciidoc
@@ -6,7 +6,9 @@ queue is responsible for buffering and combining events into batches that can
 be consumed by the outputs. The outputs will use bulk operations to send a
 batch of events in one transaction.

-You can configure the type and behavior of the internal queue by setting options in the `queue` section of the +{beatname_lc}.yml+ config file.
+You can configure the type and behavior of the internal queue by setting
+options in the `queue` section of the +{beatname_lc}.yml+ config file. Only one
+queue type can be configured.

 Example configuration:

 [source,yaml]
 ------------------------------------------------------------------------------
 queue.mem:

 [[configuration-internal-queue-memory]]
 === Configure the memory queue

-The memory queue keeps all events in memory. It is the only queue type
-supported right now.
+The memory queue keeps all events in memory.

 If no flush interval and no number of events to flush is configured, all events
 published to this queue will be directly consumed by the outputs.

@@ -33,7 +34,7 @@ By default `flush.min.events` is set to 2048 and `flush.timeout` is set to 1s.
 The output's `bulk_max_size` setting limits the number of events being processed at once.

 The memory queue waits for the output to acknowledge or drop events. If
-the queue is full, no new events can be inserted into the memeory queue. Only
+the queue is full, no new events can be inserted into the memory queue. Only
 after the signal from the output will the queue free up space for more events
 to be accepted.

 This sample configuration forwards events to the output if 512 events are
@@ -76,3 +77,147 @@ will be immediately available for consumption.

 The default values is 0s.

+[float]
+[[configuration-internal-queue-spool]]
+=== Configure the file spool queue
+
+The file spooling queue stores all events in an on disk ring buffer. The spool
+has a write buffer, which new events are written to. Events written to the
+spool are forwarded to the outputs, only after the write buffer has been
+flushed successfully.
+
+The spool waits for the output to acknowledge or drop events. If the spool is
+full, no new events can be inserted. The spool will block. Space is freed only
+after a signal from the output has been received.
+
+On disk, the spool operates in pages. The `file.page_size` setting configures
+the files page size on creation time. The optimal page size depends on the
+effective block size, used by the underlying file system.
+
+This sample configuration enables the spool with all default settings and the
+default file path:
+
+[source,yaml]
+------------------------------------------------------------------------------
+queue.spool: ~
+------------------------------------------------------------------------------
+
+This sample configuration creates a spool of 512MiB, with 16KiB pages. The
+write buffer is flushed if 10MiB of contents, or 1024 events have been
+written. If oldest available event is already waiting for 5s in the write
+buffer, the buffer will be flushed as well:
+
+[source,yaml]
+------------------------------------------------------------------------------
+queue.spool:
+  file:
+    path: "${path.data}/spool.dat"
+    size: 512MiB
+    page_size: 16KiB
+  write:
+    buffer_size: 10MiB
+    flush.timeout: 5s
+    flush.events: 1024
+------------------------------------------------------------------------------
+
+[float]
+==== Configuration options
+
+You can specify the following options in the `queue.spool` section of the
++{beatname_lc}.yml+ config file:
+
+[float]
+===== `file.path`
+
+The spool file path. The file is created on startup, if it does not exist.
+
+The default value is "${path.data}/spool.dat".
+
+===== `file.permissions`
+
+The file permissions. The permissions are applied when the file is
+created. In case the file already exists, The file permissions are compared
+with `file.permissions`. The spool file is not opened if the actual file
+permissions are more permissive than configured.
+
+The default value is 0600.
+
+
+===== `file.size`
+
+Spool file size.
+
+The default value is 100 MiB.
+
+Note: The size should be much larger than the expected event sizes
+and write buffer size. Otherwise the queue will block, because it does not have
+enough space.
+
+Note: The file size can not be changed once the file has been generated. This
+limitation will be removed in the future.
+
+
+===== `file.page_size`
+
+The files page size.
+
+The spool file is split into multiple pages of `page_size`. All write
+operations write a complete page.
+
+The default value is 4096 (4KiB).
+
+Note: This setting should match the file systems minimum block size. If the
+`page_size` is not a multiple of the file systems block size, the file system
+might create additional read operations on writes.
+
+Note: The page size is only set on file creation time. It can not be changed
+afterwards.
+
+
+===== `file.prealloc`
+
+If `prealloc` is set to `true`, truncate is used to reserve the space up to
+`file.size`. This setting is only used when the file is created.
+
+The file will dynamically grow, if `prealloc` is set to false. The spool
+blocks, if `prealloc` is `false` and the system is out of disk space.
+
+The default value is `true`.
+
+===== `write.buffer_size`
+
+The write buffer size. The write buffer is flushed once the buffer size is exceeded.
+
+Very big events are allowed to be bigger than the configured buffer size. But
+the write buffer will be flushed right after the event has been serialized.
+
+The default value is 1MiB.
+
+===== `write.codec`
+
+The event encoding used for serialized events. Valid values are `json` and `cbor`.
+
+The default value is `cbor`.
+
+===== `write.flush.timeout`
+
+Maximum wait time of the oldest event in the write buffer. If set to 0, the
+write buffer will only be flushed once `write.flush.events` or `write.buffer_size` is fulfilled.
+
+The default value is 1s.
+
+===== `write.flush.events`
+
+Number of buffered events. The write buffer is flushed once the limit is reached.
+
+The default value is 16384.
+
+===== `read.flush.timeout`
+
+If configured, the `read.flush.timeout` ensures the spool will wait for more
+events to be flushed to the spool file, If the number of available events is
+less then the outputs `bulk_max_size`.
+
+If `read.flush.timeout` is 0, all available events are forwarded to the output immediately.
+
+The default value is 0s.
From 2f8c693a49b4ea77fb595ebeb048f3087f5e3a25 Mon Sep 17 00:00:00 2001
From: urso
Date: Wed, 25 Apr 2018 14:05:05 +0200
Subject: [PATCH 2/4] review

---
 libbeat/docs/queueconfig.asciidoc | 48 +++++++++++++++++--------------
 1 file changed, 27 insertions(+), 21 deletions(-)

diff --git a/libbeat/docs/queueconfig.asciidoc b/libbeat/docs/queueconfig.asciidoc
index 2931d2e6fb1..a3ea5bf8578 100644
--- a/libbeat/docs/queueconfig.asciidoc
+++ b/libbeat/docs/queueconfig.asciidoc
@@ -11,7 +11,7 @@ options in the `queue` section of the +{beatname_lc}.yml+ config file. Only one
 queue type can be configured.

-Example configuration:
+This sample configurations configures the memory queue, buffering up to 4096 events:

 [source,yaml]
 ------------------------------------------------------------------------------
@@ -38,7 +38,7 @@ the queue is full, no new events can be inserted into the memory queue. Only
 after the signal from the output will the queue free up space for more events
 to be accepted.

 This sample configuration forwards events to the output if 512 events are
-available or the oldest available event is already waiting for 5s in the queue:
+available or the oldest available event has been waiting for 5s in the queue:

 [source,yaml]
 ------------------------------------------------------------------------------
@@ -81,7 +81,7 @@ The default values is 0s.
 [[configuration-internal-queue-spool]]
 === Configure the file spool queue

-The file spooling queue stores all events in an on disk ring buffer. The spool
+The file spool queue stores all events in an on disk ring buffer. The spool
 has a write buffer, which new events are written to. Events written to the
 spool are forwarded to the outputs, only after the write buffer has been
 flushed successfully.
@@ -90,11 +90,12 @@ The spool waits for the output to acknowledge or drop events. If the spool is
 full, no new events can be inserted. The spool will block. Space is freed only
 after a signal from the output has been received.

-On disk, the spool operates in pages. The `file.page_size` setting configures
-the files page size on creation time. The optimal page size depends on the
-effective block size, used by the underlying file system.
+On disk, the spool divides a file into pages. The `file.page_size` setting
+configures the file's page size at file creation time. The optimal page size depends
+on the effective block size, used by the underlying file system.

-This sample configuration enables the spool with all default settings and the
+This sample configuration enables the spool with all default settings (See
+<<configuration-internal-queue-spool-reference>> for defaults) and the
 default file path:

 [source,yaml]
 ------------------------------------------------------------------------------
@@ -104,7 +105,7 @@ queue.spool: ~
 ------------------------------------------------------------------------------

 This sample configuration creates a spool of 512MiB, with 16KiB pages. The
 write buffer is flushed if 10MiB of contents, or 1024 events have been
-written. If oldest available event is already waiting for 5s in the write
+written. If the oldest available event has been waiting for 5s in the write
 buffer, the buffer will be flushed as well:

 [source,yaml]
 ------------------------------------------------------------------------------
@@ -121,6 +122,7 @@ queue.spool:
 ------------------------------------------------------------------------------

 [float]
+[[configuration-internal-queue-spool-reference]]
 ==== Configuration options

 You can specify the following options in the `queue.spool` section of the
 +{beatname_lc}.yml+ config file:

 [float]
 ===== `file.path`

 The spool file path. The file is created on startup, if it does not exist.

 The default value is "${path.data}/spool.dat".

 ===== `file.permissions`

 The file permissions. The permissions are applied when the file is
-created. In case the file already exists, The file permissions are compared
+created. In case the file already exists, the file permissions are compared
 with `file.permissions`. The spool file is not opened if the actual file
 permissions are more permissive than configured.

 The default value is 0600.
@@ -149,28 +151,28 @@ Spool file size.

 The default value is 100 MiB.

-Note: The size should be much larger than the expected event sizes
+NOTE: The size should be much larger than the expected event sizes
 and write buffer size. Otherwise the queue will block, because it does not have
 enough space.

-Note: The file size can not be changed once the file has been generated. This
+NOTE: The file size cannot be changed once the file has been generated. This
 limitation will be removed in the future.

 ===== `file.page_size`

-The files page size.
+The file's page size.

-The spool file is split into multiple pages of `page_size`. All write
-operations write a complete page.
+The spool file is split into pages of `page_size`. All I/O
+operations operate on complete pages.

 The default value is 4096 (4KiB).

-Note: This setting should match the file systems minimum block size. If the
-`page_size` is not a multiple of the file systems block size, the file system
+NOTE: This setting should match the file system's minimum block size. If the
+`page_size` is not a multiple of the file system's block size, the file system
 might create additional read operations on writes.

-Note: The page size is only set on file creation time. It can not be changed
+NOTE: The page size is only set at file creation time. It cannot be changed
 afterwards.
@@ -214,10 +216,14 @@ The default value is 16384.

 ===== `read.flush.timeout`

-If configured, the `read.flush.timeout` ensures the spool will wait for more
-events to be flushed to the spool file, If the number of available events is
-less then the outputs `bulk_max_size`.
+The spool reader tries to read up to the output's `bulk_max_size` events at once.

-If `read.flush.timeout` is 0, all available events are forwarded to the output immediately.
+If `read.flush.timeout` is set to 0s, all available events are forwarded
+immediately to the output.
+
+If `read.flush.timeout` is set to a value bigger than 0s, the spool will wait
+for more events to be flushed. Events are forwarded to the output if
+`bulk_max_size` events have been read or the oldest read event has been waiting
+for the configured duration.

 The default value is 0s.

From 3dd841e8c300a0eee912b04211fcca3bc0e86c78 Mon Sep 17 00:00:00 2001
From: urso
Date: Thu, 31 May 2018 16:25:37 +0200
Subject: [PATCH 3/4] review (phrasing)

---
 libbeat/docs/queueconfig.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libbeat/docs/queueconfig.asciidoc b/libbeat/docs/queueconfig.asciidoc
index a3ea5bf8578..79ed8083004 100644
--- a/libbeat/docs/queueconfig.asciidoc
+++ b/libbeat/docs/queueconfig.asciidoc
@@ -11,7 +11,7 @@ options in the `queue` section of the +{beatname_lc}.yml+ config file. Only one
 queue type can be configured.

-This sample configurations configures the memory queue, buffering up to 4096 events:
+This sample configuration sets the memory queue to buffer up to 4096 events:

 [source,yaml]
 ------------------------------------------------------------------------------

From fbb2c082c8ebb6d961ed6422889361f23f6e093d Mon Sep 17 00:00:00 2001
From: urso
Date: Thu, 31 May 2018 18:53:55 +0200
Subject: [PATCH 4/4] Add missing [float]

---
 libbeat/docs/queueconfig.asciidoc | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/libbeat/docs/queueconfig.asciidoc b/libbeat/docs/queueconfig.asciidoc
index 79ed8083004..df567114bc1 100644
--- a/libbeat/docs/queueconfig.asciidoc
+++ b/libbeat/docs/queueconfig.asciidoc
@@ -135,6 +135,7 @@ The spool file path. The file is created on startup, if it does not exist.

 The default value is "${path.data}/spool.dat".

+[float]
 ===== `file.permissions`

 The file permissions. The permissions are applied when the file is
@@ -145,6 +146,7 @@ permissions are more permissive than configured.

 The default value is 0600.

+[float]
 ===== `file.size`

 Spool file size.
@@ -158,7 +160,7 @@ enough space.

 NOTE: The file size cannot be changed once the file has been generated. This
 limitation will be removed in the future.

-
+[float]
 ===== `file.page_size`

 The file's page size.
@@ -175,7 +177,7 @@ might create additional read operations on writes.

 NOTE: The page size is only set at file creation time. It cannot be changed
 afterwards.

-
+[float]
 ===== `file.prealloc`

 If `prealloc` is set to `true`, truncate is used to reserve the space up to
@@ -186,6 +188,7 @@ blocks, if `prealloc` is `false` and the system is out of disk space.

 The default value is `true`.

+[float]
 ===== `write.buffer_size`

 The write buffer size. The write buffer is flushed once the buffer size is exceeded.
@@ -195,12 +198,14 @@ the write buffer will be flushed right after the event has been serialized.

 The default value is 1MiB.

+[float]
 ===== `write.codec`

 The event encoding used for serialized events. Valid values are `json` and `cbor`.

 The default value is `cbor`.

+[float]
 ===== `write.flush.timeout`

 Maximum wait time of the oldest event in the write buffer. If set to 0, the
 write buffer will only be flushed once `write.flush.events` or `write.buffer_siz

 The default value is 1s.

+[float]
 ===== `write.flush.events`

 Number of buffered events. The write buffer is flushed once the limit is reached.

 The default value is 16384.

+[float]
 ===== `read.flush.timeout`

 The spool reader tries to read up to the output's `bulk_max_size` events at once.
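
Taken together, the options documented in this series map onto a single `queue.spool` section. The sketch below is illustrative only and is not taken from the patches above: the `read:` grouping for `read.flush.timeout` is assumed by analogy with the `file:` and `write:` groups shown in the documented examples, and the values are either the documented defaults or the values from the 512MiB example.

[source,yaml]
------------------------------------------------------------------------------
queue.spool:
  file:
    path: "${path.data}/spool.dat"   # default spool file location
    size: 512MiB                     # from the example; cannot change after the file is created
    page_size: 16KiB                 # from the example; default is 4096 (4KiB)
    prealloc: true                   # default; reserve file.size up front
  write:
    buffer_size: 10MiB               # flush once this much data is buffered
    codec: cbor                      # default on-disk event encoding (json or cbor)
    flush.timeout: 5s                # flush if the oldest buffered event waits this long
    flush.events: 1024               # flush after this many buffered events
  read:
    flush.timeout: 0s                # default; 0s forwards available events immediately
------------------------------------------------------------------------------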