diff --git a/3rdparty-backend-licenses.txt b/3rdparty-backend-licenses.txt
index 63aa95d3..aa8deb23 100644
--- a/3rdparty-backend-licenses.txt
+++ b/3rdparty-backend-licenses.txt
@@ -15207,3 +15207,30 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 ############################################################
+############################################################
+github.com/k-sone/ipmigo
+
+MIT License
+
+Copyright (c) 2019 Keita Sone
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+############################################################
+
+
diff --git a/api/websocket-render-responses.md b/api/websocket-render-responses.md
index 3661e1f3..0b1dbd48 100644
--- a/api/websocket-render-responses.md
+++ b/api/websocket-render-responses.md
@@ -1,6 +1,26 @@
 # Response formats
 
-Although all modules respond to the same commands, the format in which they return entries differs due to the differing nature of the data types involved. The responses described here are common to RESP_GET_ENTRIES, RESP_STREAMING, and RESP_TS_RANGE; we use these request/response IDs indiscriminately through the examples in this section.
+Although all modules respond to the same commands, the format in which they return entries differs due to the differing nature of the data types involved. The responses described here are common to RESP_GET_ENTRIES, RESP_STREAMING, and RESP_TS_RANGE; we use these request/response IDs through the examples in this section.
+
+## Render Store Limits
+
+Note that Gravwell has limits in place to prevent users from consuming too much disk space with query results. By default, searches can generate a maximum of 1GB of output; this is configurable through the `Render-Store-Limit` parameter in `gravwell.conf`. Once the limit is exceeded, the renderer will stop storing results, but will otherwise allow the search to complete.
+
+All search socket response messages may contain the fields `OverLimit` and `LimitDroppedRange`. `OverLimit` is a boolean, set to true if the search results exceeded the limits. `LimitDroppedRange` indicates what, if any, time range of results has been dropped.
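+
+As a minimal sketch of raising the limit (this assumes the parameter belongs in the `[global]` section of `gravwell.conf` and is given in megabytes, so 1024 corresponds to the default 1GB; verify both against your installation):
+
+```
+[global]
+Render-Store-Limit=1024
+```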
+ +Here is an example from a search which exceeded the limits: + +``` +"OverLimit":true, +"LimitDroppedRange":{"StartTS":"2021-03-11T09:10:54.199-08:00","EndTS":"2021-03-11T09:36:00-08:00"}, +``` + +Here is an example of what you might see for a search which does *not* exceed the limits: + +``` +"OverLimit":false, +"LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, +``` ## Text & raw module responses @@ -12,6 +32,8 @@ The 'text' and 'raw' render modules return their entries as an array in a field "EntryCount": 1575, "AdditionalEntries": false, "Finished": true, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "Entries": [ { "TS":"2018-04-02T16:16:39-06:00", @@ -47,6 +69,8 @@ The table module returns the entries in a field called "Entries", containing a s "EntryCount": 1575, "AdditionalEntries": false, "Finished": true, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "Entries": { "Rows": [ { @@ -83,6 +107,8 @@ The gauge module returns entries as an array of structures containing the gauge' "EntryCount": 1, "AdditionalEntries": true, "Finished": true, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "Entries": [ { "Name": "mean", @@ -108,53 +134,58 @@ should produce a result like this: ``` { - "AdditionalEntries": true, - "Entries": [ - { - "DstLocation": "33.381516 -108.391164", - "Magnitude": 420471, - "SrcLocation": "34.054400 -118.244000", - "Values": [ - "151.11.24.133", - "192.168.2.60" - ] - }, - { - "DstLocation": "33.381516 -108.391164", - "Magnitude": 373204, - "SrcLocation": "52.382400 5.899500", - "Values": [ - "185.19.10.154", - "192.168.2.60" - ] - }, - { - "DstLocation": "33.381516 -108.391164", - "Magnitude": 246593, - "SrcLocation": "39.048100 -76.472800", - "Values": [ - "53.1.11.28", - "192.168.2.60" - ] - }, + "AdditionalEntries": true, + "Entries": [ + { + "DstLocation": "33.381516 -108.391164", + "Magnitude": 420471, + "SrcLocation": "34.054400 -118.244000", + "Values": [ + "151.11.24.133", + "192.168.2.60" + ] + }, + { + "DstLocation": "33.381516 -108.391164", + "Magnitude": 373204, + "SrcLocation": "52.382400 5.899500", + "Values": [ + "185.19.10.154", + "192.168.2.60" + ] + }, + { + "DstLocation": "33.381516 -108.391164", + "Magnitude": 246593, + "SrcLocation": "39.048100 -76.472800", + "Values": [ + "53.1.11.28", + "192.168.2.60" + ] + }, [...] 
- { - "DstLocation": "32.769700 -122.393300", - "Magnitude": 8662, - "SrcLocation": "33.381516 -108.391164", - "Values": [ - "192.168.2.60", - "192.33.23.124" - ] - } - ], - "EntryCount": 16, - "Finished": true, - "ID": 18, - "ValueNames": [ - "SrcIP", - "DstIP" - ] + { + "DstLocation": "32.769700 -122.393300", + "Magnitude": 8662, + "SrcLocation": "33.381516 -108.391164", + "Values": [ + "192.168.2.60", + "192.33.23.124" + ] + } + ], + "OverLimit": false, + "LimitDroppedRange": { + "StartTS": "0000-12-31T16:07:02-07:52", + "EndTS": "0000-12-31T16:07:02-07:52" + }, + "EntryCount": 16, + "Finished": true, + "ID": 18, + "ValueNames": [ + "SrcIP", + "DstIP" + ] } ``` @@ -168,9 +199,14 @@ The chart module returns entries in a field called "Entries", containing a struc { "EntryCount": 5, "Finished": true, - "ID": 18 + "ID": 18, "AdditionalEntries": false, - "Entries": { + "OverLimit": false, + "LimitDroppedRange": { + "StartTS": "0000-12-31T16:07:02-07:52", + "EndTS": "0000-12-31T16:07:02-07:52" + }, + "Entries": { "Names": [ "10.177.98.189", "192.168.1.101", @@ -230,195 +266,201 @@ Groups are defined in the search and are used to color nodes in the fdg display. ``` { - "AdditionalEntries": false, - "Entries": { - "groups": [ - "", - "operations", - "IT" - ], - "links": [ - { - "source": 0, - "target": 1, - "value": 1 - }, - { - "source": 2, - "target": 1, - "value": 1 - }, - { - "source": 3, - "target": 1, - "value": 1 - }, - { - "source": 2, - "target": 4, - "value": 1 - }, - { - "source": 4, - "target": 5, - "value": 1 - }, - { - "source": 0, - "target": 6, - "value": 1 - }, - { - "source": 2, - "target": 7, - "value": 1 - }, - { - "source": 2, - "target": 8, - "value": 1 - }, - { - "source": 9, - "target": 8, - "value": 1 - }, - { - "source": 10, - "target": 5, - "value": 1 - }, - { - "source": 11, - "target": 8, - "value": 1 - }, - { - "source": 2, - "target": 12, - "value": 1 - }, - { - "source": 2, - "target": 13, - "value": 1 - }, - { - "source": 14, - "target": 12, - "value": 1 - }, - { - "source": 15, - "target": 12, - "value": 1 - }, - { - "source": 16, - "target": 12, - "value": 1 - }, - { - "source": 17, - "target": 13, - "value": 1 - }, - { - "source": 18, - "target": 13, - "value": 1 - }, - { - "source": 13, - "target": 19, - "value": 1 - } - ], - "nodes": [ - { - "group": 0, - "name": "bbd307455de9" - }, - { - "group": 1, - "name": "operations-5 OPERATIONS-5$" - }, - { - "group": 0, - "name": "9b10deadbeef" - }, - { - "group": 0, - "name": "db48a5920a82" - }, - { - "group": 2, - "name": "desktop-2 DESKTOP-2$" - }, - { - "group": 2, - "name": "e758bb7d2630" - }, - { - "group": 1, - "name": "operations-2 OPERATIONS-2$" - }, - { - "group": 2, - "name": "desktop-3 DESKTOP-3$" - }, - { - "group": 2, - "name": "desktop-4 DESKTOP-4$" - }, - { - "group": 0, - "name": "4f194d5cf71a" - }, - { - "group": 0, - "name": "desktop-1 DESKTOP-1$" - }, - { - "group": 0, - "name": "6" - }, - { - "group": 1, - "name": "operation-desktop OPERATION-DESKT$" - }, - { - "group": 2, - "name": "DESKTOP-67T38GD DESKTOP-67T38GD$" - }, - { - "group": 0, - "name": "2fd6276575c7" - }, - { - "group": 0, - "name": "dfc56224743c" - }, - { - "group": 0, - "name": "cb7b71a72272" - }, - { - "group": 0, - "name": "2f01fbc81c46" - }, - { - "group": 0, - "name": "379bd32ecec6" - }, - { - "group": 2, - "name": "foobar" - } - ] - }, - "EntryCount": 19, - "Finished": true, - "ID": 18 + "AdditionalEntries": false, + "OverLimit": false, + "LimitDroppedRange": { + "StartTS": "0000-12-31T16:07:02-07:52", + "EndTS": 
"0000-12-31T16:07:02-07:52" + }, + "Entries": { + "groups": [ + "", + "operations", + "IT" + ], + "links": [ + { + "source": 0, + "target": 1, + "value": 1 + }, + { + "source": 2, + "target": 1, + "value": 1 + }, + { + "source": 3, + "target": 1, + "value": 1 + }, + { + "source": 2, + "target": 4, + "value": 1 + }, + { + "source": 4, + "target": 5, + "value": 1 + }, + { + "source": 0, + "target": 6, + "value": 1 + }, + { + "source": 2, + "target": 7, + "value": 1 + }, + { + "source": 2, + "target": 8, + "value": 1 + }, + { + "source": 9, + "target": 8, + "value": 1 + }, + { + "source": 10, + "target": 5, + "value": 1 + }, + { + "source": 11, + "target": 8, + "value": 1 + }, + { + "source": 2, + "target": 12, + "value": 1 + }, + { + "source": 2, + "target": 13, + "value": 1 + }, + { + "source": 14, + "target": 12, + "value": 1 + }, + { + "source": 15, + "target": 12, + "value": 1 + }, + { + "source": 16, + "target": 12, + "value": 1 + }, + { + "source": 17, + "target": 13, + "value": 1 + }, + { + "source": 18, + "target": 13, + "value": 1 + }, + { + "source": 13, + "target": 19, + "value": 1 + } + ], + "nodes": [ + { + "group": 0, + "name": "bbd307455de9" + }, + { + "group": 1, + "name": "operations-5 OPERATIONS-5$" + }, + { + "group": 0, + "name": "9b10deadbeef" + }, + { + "group": 0, + "name": "db48a5920a82" + }, + { + "group": 2, + "name": "desktop-2 DESKTOP-2$" + }, + { + "group": 2, + "name": "e758bb7d2630" + }, + { + "group": 1, + "name": "operations-2 OPERATIONS-2$" + }, + { + "group": 2, + "name": "desktop-3 DESKTOP-3$" + }, + { + "group": 2, + "name": "desktop-4 DESKTOP-4$" + }, + { + "group": 0, + "name": "4f194d5cf71a" + }, + { + "group": 0, + "name": "desktop-1 DESKTOP-1$" + }, + { + "group": 0, + "name": "6" + }, + { + "group": 1, + "name": "operation-desktop OPERATION-DESKT$" + }, + { + "group": 2, + "name": "DESKTOP-67T38GD DESKTOP-67T38GD$" + }, + { + "group": 0, + "name": "2fd6276575c7" + }, + { + "group": 0, + "name": "dfc56224743c" + }, + { + "group": 0, + "name": "cb7b71a72272" + }, + { + "group": 0, + "name": "2f01fbc81c46" + }, + { + "group": 0, + "name": "379bd32ecec6" + }, + { + "group": 2, + "name": "foobar" + } + ] + }, + "EntryCount": 19, + "Finished": true, + "ID": 18 } + ``` \ No newline at end of file diff --git a/api/websocket-render.md b/api/websocket-render.md index 4dfb7a0d..dabd0ef5 100644 --- a/api/websocket-render.md +++ b/api/websocket-render.md @@ -206,6 +206,8 @@ The response contains stats information and information about the search itself: "EntryCount": 1575, "AdditionalEntries": false, "Finished": true + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, } ``` @@ -257,6 +259,8 @@ The server responds with an array of entries and additional information. 
"EntryCount": 1575, "AdditionalEntries": false, "Finished": true, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "Entries": { "Rows": [ { @@ -305,6 +309,8 @@ The renderer will begin send large blocks of entries as quickly as it can: "EntryCount": 1000, "AdditionalEntries": false, "Finished": false, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "Entries": [ <1000 entries elided> ] @@ -315,6 +321,8 @@ The renderer will begin send large blocks of entries as quickly as it can: "EntryCount": 861, "AdditionalEntries": false, "Finished": false, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "Entries": [ <861 entries elided> ] @@ -324,6 +332,8 @@ The renderer will begin send large blocks of entries as quickly as it can: "EntryCount": 0, "AdditionalEntries": false, "Finished": true, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "Entries": [] } ``` @@ -356,6 +366,8 @@ The server responds with entries which fall within the requested time: "EntryCount":1575, "AdditionalEntries":false, "Finished":true, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "Entries": { "Rows": [ { @@ -476,6 +488,8 @@ The response contains only a single Stats entry: ], "Size": 2 }, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "EntryCount": 510000 } ``` @@ -530,6 +544,8 @@ Response: ], "Size": 100 }, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "EntryCount": 500000 } ``` @@ -551,6 +567,8 @@ Response: "AdditionalEntries": false, "EntryCount": 1575, "Finished": true, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "ID": 2130706437, "Stats": { "Set": [ @@ -609,6 +627,8 @@ Response: "EntryCount": 1575, "AdditionalEntries": false, "Finished": true, + "OverLimit":false, + "LimitDroppedRange":{"StartTS":"0000-12-31T16:07:02-07:52","EndTS":"0000-12-31T16:07:02-07:52"}, "Metadata": { "ValueStats": [ { diff --git a/changelog/4.1.5.md b/changelog/4.1.5.md new file mode 100644 index 00000000..9d5272ae --- /dev/null +++ b/changelog/4.1.5.md @@ -0,0 +1,28 @@ +# Changelog for version 4.1.5 + +### Released March 31 2021 + +## Backend Changes +* Added the [transaction module](#!search/transaction/transaction.md) +* Modified webserver cache code to log accesses to cached remote resources. +* Fixed corner case with gauge module argument parsing. +* Fixed problem with enumerated value hinting when using the syslog module and fulltext acceleration. +* Fixed rare failure state that led to replication stalling. +* Fixed bug in shard recovery code that could lead to crashes. +* Added code to invalidate all existing sessions when Gravwell is restored from a backup. + +## Frontend Changes +* Added Backup/Restore feature for admins. +* Made various improvements to style and usability. +* Fixed a style issue related to starting a new search before an old search is finished. +* Fixed an issue with running searches over Unix timestamp range. +* Fixed an issue where search suggestions wouldn't show in some situations. +* Fixed a display issue with collapsing charts. 
+
+* Fixed an issue where zoomed time frames were lost on search relaunch.
+* Added support for displaying newlines and tabs in table cells.
+
+## Ingesters & API Changes
+* Added IPMI ingester.
+* Added HEC-compatible receiver to HTTP ingester.
+* Added Ingester-Name field to ingester configs, to provide a user-friendly name for the ingester.
+* Added fields to render module response types that indicate if the search exceeded storage limits.
diff --git a/changelog/list.md b/changelog/list.md
index bab17029..df1afe67 100644
--- a/changelog/list.md
+++ b/changelog/list.md
@@ -2,10 +2,11 @@
 
 ## Current Version
 
-[4.1.4](4.1.4.md)
+* [4.1.5](4.1.5.md)
 
 ## Previous Versions
 
+* [4.1.4](4.1.4.md)
 * [4.1.3](4.1.3.md)
 * [4.1.2](4.1.2.md)
 * [4.1.1](4.1.1.md)
diff --git a/configuration/docker.md b/configuration/docker.md
index 8186d01e..cd404634 100644
--- a/configuration/docker.md
+++ b/configuration/docker.md
@@ -89,11 +89,7 @@ We can then run a quick search over the last hour to verify that the data made i
 
 ## Set up ingesters
 
-Besides the Simple Relay ingester that ships with the gravwell/gravwell image, we currently provide three pre-built standalone ingester images:
-
-* [gravwell/netflow_capture](https://hub.docker.com/r/gravwell/netflow_capture/) is a Netflow collector, configured to receive Netflow v5 records on port 2055 and and IPFIX records on port 6343
-* [gravwell/collectd](https://hub.docker.com/r/gravwell/collectd/) receives hardware stats from collectd acquisition points on port 25826
-* [gravwell/simple_relay](https://hub.docker.com/r/gravwell/simple_relay/) is the Simple Relay ingester as pre-installed on the core image, in case you want to deploy it separately too.
+Besides the Simple Relay ingester that ships with the gravwell/gravwell image, we provide a number of pre-built images for our ingesters. More information can be found on the [Gravwell Docker Hub](https://hub.docker.com/u/gravwell) page.
 
 We'll launch the Netflow ingester here, but the same command (with names and ports changed) can be used for the other ingesters too:
diff --git a/ingesters/ingesters.md b/ingesters/ingesters.md
index d426ebf2..2ce6fe88 100644
--- a/ingesters/ingesters.md
+++ b/ingesters/ingesters.md
@@ -22,11 +22,11 @@ From the user's point of view, tags are strings such as "syslog", "pcap-router",
 !@#$%^&*()=+<>,.:;"'{}[]|\
 ```
 
-You should also refrain from using nonprinting or difficult-to-type characters when selecting tag names, as this will make querying a challenge for users. Although you *could* ingest into a tag named ☺, that doesn't mean it's a good idea!
+You should also refrain from using non-printing or difficult-to-type characters when selecting tag names, as this will make querying a challenge for users. Although you *could* ingest into a tag named ☺, that doesn't mean it's a good idea!
 
 ### Tag Wildcards
 
-When chosing tag names, keep in mind that Gravwell allows wildcards when specifying tag names to query. By selecting your tag names carefully, you can make later querying easier.
+When choosing tag names, keep in mind that Gravwell allows wildcards when specifying tag names to query. By selecting your tag names carefully, you can make later querying easier.
 For instance, if you are collecting system logs from five servers, of which two are HTTP servers, two are file servers, and one is an email server, you may choose to use the following tags:
@@ -48,11 +49,12 @@ When an *ingester* connects to an indexer, it sends a list of tag names it inten
 
 ## Global Configuration Parameters
 
-Most of the core ingesters support a common set of global configuration parameters. The shared Global configuration parameters are implemented using the [ingest config](https://godoc.org/github.com/gravwell/ingest/config#IngestConfig) package. Global configuration parameters should be specified in the Global section of each Gravwell ingester config file. The following Global ingester paramters are available:
+Most of the core ingesters support a common set of global configuration parameters. The shared Global configuration parameters are implemented using the [ingest config](https://godoc.org/github.com/gravwell/ingest/config#IngestConfig) package. Global configuration parameters should be specified in the Global section of each Gravwell ingester config file. The following Global ingester parameters are available:
 
 * Ingest-Secret
 * Connection-Timeout
 * Rate-Limit
+* Enable-Compression
 * Insecure-Skip-TLS-Verify
 * Cleartext-Backend-Target
 * Encrypted-Backend-Target
@@ -72,7 +73,7 @@ The Ingest-Secret parameter specifies the token to be used for ingest authentica
 
 ### Connection-Timeout
 
-The Connection-Timeout parameter specifies how long we want to wait to connect to an indexer before giving up. An empty timeout means that the ingester will wait forever to start. Timeouts should be specified in durations of minutes, seconds, or hours.
+The Connection-Timeout parameter specifies how long we want to wait to connect to an indexer before giving up. An empty timeout means that the ingester will wait forever to start. Timeouts should be specified in units of minutes, seconds, or hours.
 
 #### Examples
 ```
@@ -112,6 +113,20 @@ Rate-Limit=2048Kbps
 Rate-Limit=3MBps
 ```
 
+### Enable-Compression
+
+The ingest system supports a transparent compression system that will compress data as it flows between ingesters and indexers. This transparent compression is extremely fast and can help reduce load on slower links. Each ingester can request a compressed uplink for all connections by setting the `Enable-Compression` parameter to `true` in the global configuration block.
+
+The compression system is opportunistic: the ingester requests compression, but the upstream link gets the final say on whether compression is enabled. If the upstream endpoint does not support compression or has been configured to disallow it, the link will not be compressed.
+
+Compression will increase the CPU and memory requirements of an ingester; if the ingester is running on an endpoint with minimal CPU and/or memory, compression may reduce throughput. Compression is best suited for WAN connections; enabling compression on a Unix named pipe just incurs CPU and memory overhead with no added benefit.
+
+#### Example
+
+```
+Enable-Compression=true
+```
+
 ### Cleartext-Backend-Target
 
 Cleartext-Backend-Target specifies the host and port of a Gravwell indexer. The ingester will connect to the indexer using a cleartext TCP connection. If no port is specified, the default port 4023 is used. Cleartext connections support both IPv6 and IPv4 destinations.
 **Multiple Cleartext-Backend-Targets can be specified to load balance an ingester across multiple indexers.**
@@ -1852,6 +1867,67 @@ precendence and evaluate left-to-right. Parens can also be used to group.
 
 **Note**: This section sourced from [Google Stenographer](https://github.com/google/stenographer/blob/master/README.md)
 
+## IPMI Ingester
+
+The IPMI Ingester collects Sensor Data Record (SDR) and System Event Log (SEL) records from any number of IPMI devices.
+
+The configuration file provides simple host/port, username, and password fields for connecting to each IPMI device. SEL and SDR records are ingested in a JSON-encoded schema. For example:
+
+```
+{
+    "Type": "SDR",
+    "Target": "10.10.10.10:623",
+    "Data": {
+        "+3.3VSB": {
+            "Type": "Voltage",
+            "Reading": "3.26",
+            "Units": "Volts",
+            "Status": "ok"
+        },
+        "+5VSB": {...},
+        "12V": {...}
+    }
+}
+
+{
+    "Target": "10.10.10.10:623",
+    "Type": "SEL",
+    "Data": {
+        "RecordID": 25,
+        "RecordType": 2,
+        "Timestamp": {
+            "Value": 1506550240
+        },
+        "GeneratorID": 32,
+        "EvMRev": 4,
+        "SensorType": 5,
+        "SensorNumber": 81,
+        "EventType": 111,
+        "EventDir": 0,
+        "EventData1": 240,
+        "EventData2": 255,
+        "EventData3": 255
+    }
+}
+```
+
+### Configuration Options
+
+IPMI uses the default set of Global configuration options. Individual IPMI devices are configured with an "IPMI" stanza. For example:
+
+```
+[IPMI "Server 1"]
+    Target="127.0.0.1:623"
+    Username="user"
+    Password="pass"
+    Tag-Name=ipmi
+    Source-Override="DEAD::BEEF"
+```
+
+The IPMI stanza is simple, taking only a Target (the IP:PORT of the IPMI device), username, password, and tag. Optionally, you can set a source override to force the SRC field on all ingested entries to another IP. By default, the SRC field is set to the IP of the IPMI device.
+
+Additionally, all IPMI stanzas can use the "Preprocessor" options, as described [here](https://docs.gravwell.io/#!ingesters/preprocessors/preprocessors.md).
+
 ## The Gravwell Federator
 
 The Federator is an entry relay: ingesters connect to the Federator and send it entries, then the Federator passes those entries to an indexer. The Federator can act as a trust boundary, securely relaying entries across network segments without exposing ingest secrets or allowing untrusted nodes to send data for disallowed tags. The Federator upstream connections are configured like any other ingester, allowing multiplexing, local caching, encryption, etc.
diff --git a/quickstart/downloads.md b/quickstart/downloads.md
index eb4a397d..00f9b722 100644
--- a/quickstart/downloads.md
+++ b/quickstart/downloads.md
@@ -6,7 +6,7 @@ Attention: The debian repository is more easily maintained than these standalone
 
 The Gravwell core installer contains the indexer and webserver frontend. You'll need a license; either get a Community Edition free license, or contact info@gravwell.io for commercial options.
-[Download Gravwell Core Installer](https://update.gravwell.io/archive/4.1.4/installers/gravwell_4.1.4.sh) (SHA256: b23aebd5098a1010d6c29b5dcf5cfa73209326cf37a2f263c2cf88baa653f4be) +[Download Gravwell Core Installer](https://update.gravwell.io/archive/4.1.5/installers/gravwell_4.1.5.sh) (SHA256: 2a5805ebe8dbe3b0be132fc9952380360060b92128ded9534aa3c1a99394f2f9) ## Ingesters @@ -15,22 +15,23 @@ The core suite of ingesters are available for download as an installable package ### Current Ingester Releases | Ingester | Description | SHA256 | More Info | |:--------:|-------------|:------:|----------:| -| [Simple Relay](#!ingesters/ingesters.md#Simple_Relay) | An ingester capable of accepting syslog or line brokend data sent over the network. |574dfbc294b5c7c149144923ab3b03bff263732924185e84a5fbf33349b4ce3f| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_simple_relay_installer_4.1.4.sh)| -| [File Follower](#!ingesters/ingesters.md#File_Follower) | The standard file following ingester designed to look for line broken log entries in files. Useful for ingesting logs from systems that can only log to files. |ec10d847a3fd251d514e65e6b9a4e40d830987f1b0be0f064823b8b3c3e08da9| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_file_follow_installer_4.1.4.sh) | -| [HTTP Ingester](#!ingesters/ingesters.md#HTTP_POST) | The HTTP ingester allows for hosting a simple webserver that takes HTTP requests in as events. SOHO and IOT devices often support webhook functionality which the HTTP ingester is perfectly suited to support. |3dd620f21563f497c18f54263d950e119f1e513b4ec8b685f15110337bd3f333| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_http_ingester_installer_4.1.4.sh) | -| [Netflow Capture](#!ingesters/ingesters.md#Netflow_Ingester) | The Netflow Capture ingester acts as a Netflow v5, v9, and ipfix collector, ingesting Netflow records as Gravwell entries. |ab2887d22c662ad62a50ba77917e3b39c6b3fecd083e7aa86400c9a5ae81f9ca| [Download](http://update.gravwell.io/archive/4.1.4/installers/gravwell_netflow_capture_installer_4.1.4.sh) | -| [Network Capture](#!ingesters/ingesters.md#Network_Ingester) | The Network Capture ingester is a passive network sniffing ingester which can bind to multiple network taps and send raw network traffic to Gravwell. |7b33c0414d66506fff7458b00d175b3a732348a3a6a03cf76515bc61c150f4d7| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_network_capture_installer_4.1.4.sh) | -| [Collectd Collector](#!ingesters/ingesters.md#collectd) | The collectd ingester acts as a standalone collectd collector. |fa2f52e79a4981383842c1edb7c91a9bd085866795c7f604cd22ed61dcd1e942| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_collectd_installer_4.1.4.sh) | -| [Ingest Federator](#!ingesters/ingesters.md#Federator_Ingester) | The Federator ingester is designed to aggregate multiple downstream ingesters and relay entries to upstream ingestion points. The Federator is useful for crossing trust boundaries, aggregating entry flows, and insulating Gravwell indexers from potentially untrusted downstream entry generators. |aee3c81d0901bc5b2b48f64d98ab36cd0f4f0b40f4fa7d0633becbbe3d455056| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_federator_installer_4.1.4.sh) | -| [Windows Events](#!ingesters/ingesters.md#Windows_Event_Service) | The Winevent ingester uses the Windows events subsystem to acquire windows events and ship them to gravwell. 
The Winevent ingester can be placed on a single Windows machine acting as a log collector, or on multiple endpoints. |57489db05d48cd50dd5f9a370612ffc0ae1c12182f3c548e6ffd32ca254205b5| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_win_events_4.1.4.msi) |
-| [Windows File Follower](#!ingesters/ingesters.md#File_Follower) | The Windows file follower is identical to the File Follower ingester, but for Windows. |4bdf85c5a8060196f468571645680152f8d50115317c553be3193e773e9f463a| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_file_follow_4.1.4.msi) |
-| [Apache Kafka](#!ingesters/ingesters.md#Kafka) | The Apache Kafka ingester can attach to one or many Kafka clusters and read topics. It can simplify massive deployments. |a8ae2f698e51153c977512ad890061dcc9800e8a9c1d49be4477821217434768| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_kafka_installer_4.1.4.sh)|
-| [Amazon Kinesis](#!ingesters/ingesters.md#Kinesis_Ingester) | The Amazon Web Services Kinesis ingester can attach to the Kinesis stream and dramatically simplify logging a cloud deployment |32f003bc18a5890f652a3226c3dde3ee24debaaa0dcbbcba69559c344074f16d| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_kinesis_ingest_installer_4.1.4.sh)|
-| [Google PubSub](#!ingesters/ingesters.md#GCP_PubSub) | The Google Cloud Platform PubSub Ingester can subscribe to exhausts on the GCP PubSub system, easing integration with GCP. |b4240f1a7305b34678a1d0e6f00f6218fc519a3c90df5c73cc7cb00cc7f64a81| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_pubsub_ingest_installer_4.1.4.sh)|
-| [Office 365 Logs](#!ingesters/ingesters.md#Office_365_Log_Ingester) | The Office 365 log ingester can fetch log events from Microsoft Office 365. |50860ae54416bec1dab8dd09c47eecaec329d143e3f769ea33fecc2a17c2020a| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_o365_installer_4.1.4.sh)|
-| [Microsoft Graph API](#!ingesters/ingesters.md#Microsoft_Graph_API_Ingester) | The MS Graph API ingester can fetch security information from the Microsoft Graph API. |79905c31e24861f32e2aad06054b3c3a7c90e4547ca9634f5cf39bc1ac82333b| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_msgraph_installer_4.1.4.sh)|
-
-[//]: <> (| [](#!ingesters/ingesters.md#) | | | [Download](https://update.gravwell.io/archive/4.1.4/installers/) |)
+| [Simple Relay](#!ingesters/ingesters.md#Simple_Relay) | An ingester capable of accepting syslog or line-broken data sent over the network. |576b7795ef28889399ec4a011850033abb9d6f5452f1a199d60794feffe5211e| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_simple_relay_installer_4.1.5.sh)|
+| [File Follower](#!ingesters/ingesters.md#File_Follower) | The standard file following ingester designed to look for line-broken log entries in files. Useful for ingesting logs from systems that can only log to files. |d6dc45038bb7f39306c0a4d6b54617277ff42558586810d53c0f0fa0c56fe4dc| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_file_follow_installer_4.1.5.sh) |
+| [HTTP Ingester](#!ingesters/ingesters.md#HTTP_POST) | The HTTP ingester allows for hosting a simple webserver that accepts HTTP requests as events. SOHO and IoT devices often support webhook functionality which the HTTP ingester is perfectly suited to support.
|dd0df3ca1dd7dcca07f7e642f65c5150ee0a2f0cb816a5fab6eb4f0643b63614| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_http_ingester_installer_4.1.5.sh) |
+| [IPMI Ingester](#!ingesters/ingesters.md#IPMI_Ingester) | Collect SDR and SEL records from IPMI endpoints. |9d8de23c0e7ca8358533a0da1d6192a22ef5352ca71b6569a81eb9ce356cc518| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_ipmi_installer_4.1.5.sh)|
+| [Netflow Capture](#!ingesters/ingesters.md#Netflow_Ingester) | The Netflow Capture ingester acts as a Netflow v5, v9, and IPFIX collector, ingesting Netflow records as Gravwell entries. |eb2b904ee507795d1cfefb194246b7197390a7747d1bf7b82fcd13f32a8faa59| [Download](http://update.gravwell.io/archive/4.1.5/installers/gravwell_netflow_capture_installer_4.1.5.sh) |
+| [Network Capture](#!ingesters/ingesters.md#Network_Ingester) | The Network Capture ingester is a passive network sniffing ingester which can bind to multiple network taps and send raw network traffic to Gravwell. |015c83b6a3550032e5106838af1b12526dec6ccd201ea39c40cfd6643a48c3a6| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_network_capture_installer_4.1.5.sh) |
+| [Collectd Collector](#!ingesters/ingesters.md#collectd) | The collectd ingester acts as a standalone collectd collector. |c7810ee03abddac4a3d4c2fe1316e50f60599f42776744dbc8f0655af4bc9229| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_collectd_installer_4.1.5.sh) |
+| [Ingest Federator](#!ingesters/ingesters.md#Federator_Ingester) | The Federator ingester is designed to aggregate multiple downstream ingesters and relay entries to upstream ingestion points. The Federator is useful for crossing trust boundaries, aggregating entry flows, and insulating Gravwell indexers from potentially untrusted downstream entry generators. |a52cecececd34efe26ad4e5e4221502f1822a39eea4dd0666937c74b5eb58c57| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_federator_installer_4.1.5.sh) |
+| [Windows Events](#!ingesters/ingesters.md#Windows_Event_Service) | The Winevent ingester uses the Windows events subsystem to acquire Windows events and ship them to Gravwell. The Winevent ingester can be placed on a single Windows machine acting as a log collector, or on multiple endpoints. |315d2eaf5e7ef3f39d18dfbcc35970b0f6758774693d2fe76a5adc5fea6cee1c| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_win_events_4.1.5.msi) |
+| [Windows File Follower](#!ingesters/ingesters.md#File_Follower) | The Windows file follower is identical to the File Follower ingester, but for Windows. |f6cba02481708f75c1b9fcd3d3f0c259ac97906338d55439bf1a3a96a5cf52a9| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_file_follow_4.1.5.msi) |
+| [Apache Kafka](#!ingesters/ingesters.md#Kafka) | The Apache Kafka ingester can attach to one or many Kafka clusters and read topics. It can simplify massive deployments.
|0847178530b85bd6dbb79b8e69ec0fa50b819ba3ef23a21b163a6d5e3e31f799| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_kafka_installer_4.1.5.sh)|
+| [Amazon Kinesis](#!ingesters/ingesters.md#Kinesis_Ingester) | The Amazon Web Services Kinesis ingester can attach to the Kinesis stream and dramatically simplify logging in a cloud deployment. |a313699f25ef248bbab8935c8f631141358ac7f88b11ae518fb44af46a6d012f| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_kinesis_ingest_installer_4.1.5.sh)|
+| [Google PubSub](#!ingesters/ingesters.md#GCP_PubSub) | The Google Cloud Platform PubSub Ingester can subscribe to exhausts on the GCP PubSub system, easing integration with GCP. |6bdc84e5a750ffaf5cdb3a12d9831cb4baf026e8be4eae397d06dfa39afb0aa4| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_pubsub_ingest_installer_4.1.5.sh)|
+| [Office 365 Logs](#!ingesters/ingesters.md#Office_365_Log_Ingester) | The Office 365 log ingester can fetch log events from Microsoft Office 365. |b3088ba6c04cf50779a8ea054ed8cd5d400abecb0493a5c8443fc5b5370efb16| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_o365_installer_4.1.5.sh)|
+| [Microsoft Graph API](#!ingesters/ingesters.md#Microsoft_Graph_API_Ingester) | The MS Graph API ingester can fetch security information from the Microsoft Graph API. |d5483e4a1dc78beba78c48621b63d4d74ba914f4c45721a7edfeece4a4a43ad4| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_msgraph_installer_4.1.5.sh)|
+
+[//]: <> (| [](#!ingesters/ingesters.md#) | | | [Download](https://update.gravwell.io/archive/4.1.5/installers/) |)
 
 ## Other downloads
 
@@ -38,6 +39,6 @@ Some Gravwell components are distributed as optional additional installers, such
 
 | Component | Description | SHA256 | More Info |
 |:---------:|-------------|:------:|----------:|
-| [Datastore](#!distributed/frontend.md) | The datastore keeps multiple Gravwell webservers in sync, enabling load balancing |ae25d83f6704ff011b72e0138507bcb76e9efab4ded3efea804298b9b25d18e2| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_datastore_installer_4.1.4.sh) |
-| [Offline Replicator](#!configuration/replication.md) | The offline replication server acts as a standalone replication peer, it will not participate in queries and is best paired with single indexer Gravwell installations |d6d46e1cbe5a4a64dd52910ae27e65c71e68f1aafb9303b404df161ee765ddaa| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_offline_replication_installer_4.1.4.sh) |
-| Load Balancer | The load balancer provides a Gravwell-specific HTTP load balancing solution for fronting multiple Gravwell webservers. It connects to the datastore in order to get a list of Gravwell webservers.
|aeda5ba995ecba0c4596a010ba44103573aea41898ceb032f13c190d6a85bcb3| [Download](https://update.gravwell.io/archive/4.1.4/installers/gravwell_loadbalancer_installer_4.1.4.sh) |
+| [Datastore](#!distributed/frontend.md) | The datastore keeps multiple Gravwell webservers in sync, enabling load balancing |458dcb585e092e396974a40e9c0c743328686cfe69c5f6f9ac01ac6c24d0af1d| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_datastore_installer_4.1.5.sh) |
+| [Offline Replicator](#!configuration/replication.md) | The offline replication server acts as a standalone replication peer; it will not participate in queries and is best paired with single-indexer Gravwell installations |7615a6434731b3bf4b92db3c370db3dff2ec10caeed14acf6582f2f2383cb797| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_offline_replication_installer_4.1.5.sh) |
+| Load Balancer | The load balancer provides a Gravwell-specific HTTP load balancing solution for fronting multiple Gravwell webservers. It connects to the datastore in order to get a list of Gravwell webservers. |5bc3f949e0dc1963a883db5bdaf9a07139b4b1868e7e8343d86a5027e6430c52| [Download](https://update.gravwell.io/archive/4.1.5/installers/gravwell_loadbalancer_installer_4.1.5.sh) |
diff --git a/quickstart/quickstart.md b/quickstart/quickstart.md
index 452465b0..edb7c4b2 100644
--- a/quickstart/quickstart.md
+++ b/quickstart/quickstart.md
@@ -103,6 +103,7 @@ gravwell-datastore - Gravwell datastore service
 gravwell-federator - Gravwell ingest federator
 gravwell-file-follow - Gravwell file follow ingester
 gravwell-http-ingester - Gravwell HTTP ingester
+gravwell-ipmi - Gravwell IPMI ingester
 gravwell-kafka - Gravwell Kafka ingester
 gravwell-kafka-federator - Gravwell Kafka federator
 gravwell-kinesis - Gravwell Kinesis ingester
diff --git a/search/processingmodules.md b/search/processingmodules.md
index 202cfb07..cad4dc1e 100644
--- a/search/processingmodules.md
+++ b/search/processingmodules.md
@@ -69,6 +69,7 @@ These can be used just like user-defined enumerated values, thus `table foo bar
 * [sum](math/math.md#Sum)
 * [taint](taint/taint.md)
 * [time](time/time.md)
+* [transaction](transaction/transaction.md)
 * [unique](math/math.md#Unique)
 * [upper](upperlower/upperlower.md)
 * [variance](math/math.md#Variance)
diff --git a/search/transaction/transaction.md b/search/transaction/transaction.md
new file mode 100644
index 00000000..366fc44f
--- /dev/null
+++ b/search/transaction/transaction.md
@@ -0,0 +1,77 @@
+## Transaction
+
+NOTE: The `transaction` module can consume a large amount of memory. Use caution when running this module on memory-constrained systems.
+
+The `transaction` module transforms and groups entries in the pipeline into single-entry "transactions" - groupings of entries - based on any number of keys. It is a powerful tool for capturing the activity of a given user, IP, etc., across multiple entries in a datastream.
+
+### Supported Options
+
+* `-e`: The `-e` option operates on an enumerated value instead of on the entire record. Multiple EVs are supported by providing additional `-e` flags.
+* `-rsep`: The `-rsep` option sets the string to insert between transaction records. The default is "\n".
+* `-fsep`: The `-fsep` option sets the string to insert between enumerated values within a given record. The default is " ".
+* `-o`: The `-o` option sets the output EV to produce. The default is "transaction".
+* `-c`: The `-c` option enables a count of the number of entries that make up a given transaction, stored in an EV with the provided name. The default is "count".
+* `-maxsize`: The `-maxsize` flag sets the maximum size, in kilobytes, of a given transaction before it is evicted from the tracking table (see "Memory considerations" below). The default is 500KB.
+* `-maxstate`: The `-maxstate` flag sets the maximum number of transactions to track. Once exceeded, the oldest transaction will be evicted (see "Memory considerations" below). The default is 200.
+
+All flags are optional.
+
+### Overview
+
+The `transaction` module groups entries into single entries based on a provided set of keys. For example, given a dataset with enumerated values "host", "message", and "action", the query:
+
+```
+tag=data kv host action message | transaction -fsep " -- " host | table
+```
+
+will collapse all entries with the same value for the EV "host" into a single entry. By default, `transaction` will group all EVs that are *not* part of the key into the output. In the example above, the EVs "action" and "message" will be grouped, using `-fsep` as a separator, and all entries that match this key will then be joined using `-rsep`. To illustrate, the following entries:
+
+```
+Entry 1: host="foo" message="Host foo login" action="login"
+Entry 2: host="foo" message="Host foo delete file X" action="delete"
+Entry 3: host="bar" message="Host bar login" action="login"
+Entry 4: host="foo" message="Host foo logout" action="logout"
+```
+
+will be collapsed into two entries, one for "foo" and another for "bar":
+
+```
+Entry 1: transaction="login -- Host foo login
+ delete -- Host foo delete file X
+ logout -- Host foo logout"
+Entry 2: transaction="login -- Host bar login"
+```
+
+To specify exactly which EVs to group, you can use one or more `-e` flags in the query. EVs will be grouped in the order provided. For example:
+
+```
+tag=data kv host action message user group | transaction -e action -e message host | table
+```
+
+will only group the EVs "action" and "message", ignoring "user" and "group".
+
+Multiple keys can be provided, and records will be created based on the grouping of all provided keys. For example:
+
+```
+tag=data kv host action message user group | transaction host action user | table
+```
+
+will group records with the same host, action, and user.
+
+### Memory considerations
+
+The `transaction` module must buffer all entries in the datastream in order to create transactions. For queries that produce large amounts of data, this can quickly exhaust the available memory on a system. To prevent this, the `transaction` module provides two flags, `-maxsize` and `-maxstate`, to control how much data is retained and for how long before it is passed downstream in the pipeline.
+
+When running, the `transaction` module keeps a table of records, with one record for every unique set of provided keys. When an entry matches the keys of an existing record, it is merged into that record (or a new record is created if it is the first entry with those keys). Two checks are performed every time an entry is added to the table:
+
+* If the size of a given record exceeds the `-maxsize` argument, the record is immediately "evicted" - meaning it is sent down the query pipeline and is removed from the table.
+* If the number of records exceeds the `-maxstate` argument, the _least recently updated_ record is evicted; see the sketch below for tuning these limits.
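+
+For example (the tag and EV names here are hypothetical), a query grouping on a high-cardinality key might raise both limits to reduce premature eviction:
+
+```
+tag=syslog kv host app msg | transaction -e app -e msg -maxstate 1000 -maxsize 1000 host | table
+```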
+ +If a record is evicted, and later an entry with a key matching that of the evicted record is encountered, a new record is created. If you notice "fragmentation" in your output, check the `-maxsize` and `-maxstate` flags. + +Because the `transaction` module can easily exhaust all available memory on your Gravwell system, follow these general guidelines when writing queries with `transaction`: + +* Put the `transaction` module as late in the query as possible. +* Work on the smallest time window possible for your query. +* Start with small `-maxsize` and `-maxstate` values, and increase only if needed. +* Instead of grouping all enumerated values, only group those of concern for your query by explicitly naming them with `-e`.
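+
+Putting these guidelines together, a conservative query (again, the tag and EV names are hypothetical) run over a narrow timeframe might look like:
+
+```
+tag=auth kv user action | transaction -e action -maxstate 100 user | table
+```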