diff --git a/3rdparty-frontend-licenses.txt b/3rdparty-frontend-licenses.txt index da503b19..58cf753c 100644 --- a/3rdparty-frontend-licenses.txt +++ b/3rdparty-frontend-licenses.txt @@ -2892,32 +2892,6 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -has -MIT -Copyright (c) 2013 Thiago de Arruda - -Permission is hereby granted, free of charge, to any person -obtaining a copy of this software and associated documentation -files (the "Software"), to deal in the Software without -restriction, including without limitation the rights to use, -copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the -Software is furnished to do so, subject to the following -conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES -OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT -HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, -WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING -FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR -OTHER DEALINGS IN THE SOFTWARE. - - has-bigints MIT MIT License @@ -3540,32 +3514,6 @@ SOFTWARE. 
-is-typed-array -MIT -The MIT License (MIT) - -Copyright (c) 2015 Jordan Harband - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. - - - is-weakmap MIT MIT License @@ -4325,6 +4273,31 @@ IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +possible-typed-array-names +MIT +MIT License + +Copyright (c) 2024 Jordan Harband + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + + regenerator-runtime MIT diff --git a/_static/versions.json b/_static/versions.json index 311993a8..72105bf1 100644 --- a/_static/versions.json +++ b/_static/versions.json @@ -1,10 +1,14 @@ [ { - "name": "v5.5.7 (latest)", - "version": "v5.5.7", + "name": "v5.6.0 (latest)", + "version": "v5.6.0", "url": "/", "preferred": true }, + { + "version": "v5.5.7", + "url": "/v5.5.7/" + }, { "version": "v5.5.6", "url": "/v5.5.6/" diff --git a/architecture/architecture.md b/architecture/architecture.md index 8a6dc66e..5c6930a7 100644 --- a/architecture/architecture.md +++ b/architecture/architecture.md @@ -8,6 +8,7 @@ maxdepth: 1 caption: Architecture hidden: true --- +Filesystems Networking Considerations for Gravwell Gravwell Clusters Distributed Gravwell Webserver @@ -100,9 +101,9 @@ The core ingest mechanic requires only three data items: a byte array, timestamp ## Compatibility -### Minimum Software Requirements +### Minimum Requirements -Gravwell runs on most common Linux distributions which support [SystemD](https://en.wikipedia.org/wiki/Systemd). A minimum Linux kernel version of 3.2 and a 64bit X86 architecture are required. +Gravwell runs on most common Linux distributions which support [SystemD](https://en.wikipedia.org/wiki/Systemd). A minimum Linux kernel version of 3.2, 2GB of RAM, and a 64bit X86 architecture are required.
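These minimums are easy to verify on a host before installing. The following shell snippet is an illustrative sketch (our own check, not a Gravwell-provided tool) that prints a warning for each unmet requirement:

```shell
# Preflight sketch: verify 64-bit x86, kernel >= 3.2, and >= 2GB of RAM.
arch="$(uname -m)"
kver="$(uname -r | cut -d. -f1-2)"
mem_kb="$(awk '/MemTotal/ {print $2}' /proc/meminfo)"

[ "$arch" = "x86_64" ] || echo "WARN: unsupported architecture: $arch"

# sort -V orders dotted version strings numerically; if 3.2 still sorts
# first, the running kernel is at least 3.2.
[ "$(printf '3.2\n%s\n' "$kver" | sort -V | head -n1)" = "3.2" ] \
    || echo "WARN: kernel $kver is older than 3.2"

[ "$mem_kb" -ge 2000000 ] || echo "WARN: less than 2GB of RAM detected"
```

The version comparison relies on GNU `sort -V`, which is present on the same distributions Gravwell targets.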
### Version Locking diff --git a/cbac/cbac.md b/cbac/cbac.md index f172b7d4..18a0fc4b 100644 --- a/cbac/cbac.md +++ b/cbac/cbac.md @@ -4,6 +4,7 @@ Capability Based Access Control (CBAC) is a feature access system that enables u CBAC is based around a deny-all default policy. Capabilities and tag access must be granted to each user (or group a user belongs to) in order to access those features. Admin users are not restricted by CBAC and always have full system access. +(enabling-cbac)= ## Enabling CBAC CBAC is enabled by adding the following clause to the global section of the webserver's `gravwell.conf` and restarting the webserver: diff --git a/changelog/5.6.0.md b/changelog/5.6.0.md new file mode 100644 index 00000000..3750ad96 --- /dev/null +++ b/changelog/5.6.0.md @@ -0,0 +1,54 @@ +# Changelog for version 5.6.0 + +## Released 15 October 2024 + +## Gravwell + +### Additions + +* Added the Free and CE Advanced license tiers. +* Added the ability to download installed Kits. +* Added the [Attach flow node](/flows/nodes/attach). +* Added support for single and double quotes in Data field extractions in winlog. +* Added the ability to share results from scheduled searches and alerts globally or with multiple groups. +* Added `-maxtracked` and `-maxsize` flags to the `fuse` module. +* Added maps to persistent variables in the `eval` module. +* Added acceleration hints to the `intrinsic` module. +* Added src acceleration hints to the `eval` module. +* Added additional error handling to searches. +* Added support for an ERROR state on the Persistent Searches page. + +### Bug Fixes + +* Improved Renderer Storage Limit notifications. +* Improved recovery for searches resulting in errors. +* Improved search agent detection of searches which hit an error during a query. +* Improved sharing options for the Persistent Searches pages. +* Improved ageout to prevent hot aging to cold when cold data storage is over its threshold. 
+* Improved overview chart colors to better reflect the search status for default, warn, and error. +* Fixed an edge case on the Scheduled Search API to improve compliance with OpenAPI spec. +* Fixed an issue where overview stats could be incomplete when the Renderer Storage Limit was reached due to partial results returned. +* Fixed an issue where SSO logins would fail when a token cookie gets too big (e.g. when the groups list is long). +* Fixed an issue where a validation error could be shown on a Dispatcher owned by another user when changing an Alert schema. +* Fixed an issue where a duplicate warning would be incorrectly shown when saving your first query. +* Fixed an issue where uploading an invalid Flow would not display an error message. +* Fixed an issue where a custom label added to a Flow node could be reset by changing focus. +* Fixed an issue where a configuration Macro name would not be saved on Kit download. +* Fixed an issue where Scripts were not properly displayed in the Kit Content List when deploying. +* Fixed an issue where the cursor would jump to the end when trying to add characters to the beginning or middle of a Macro name. +* Fixed an issue where the Last Run time would not be updated without refreshing for Scheduled Searches and Scripts. +* Fixed an issue where the `Scheduled` value for Flows was incorrectly populated with the executed time instead of the scheduled time. +* Fixed an issue where the text renderer did not show intrinsic EVs without using the `intrinsic` module. +* Fixed an issue where acceleration was not working with the `src` module. +* Fixed an issue where `lookup` module could not read a CSV smaller than 8 bytes. +* Fixed an issue with resource name resolution for queries run as admin. +* Fixed an issue where a timeframe lock would be lost after two consecutive launches in Query Studio. +* Fixed an issue where enabling live search would cause the 'Fetching data...' message to be displayed until the next update. 
+* Fixed permissions in shell installers to ensure all files are owned by gravwell:gravwell instead of root. +* Sorted EVs in the Query Studio Fields tab to prevent them from rearranging. + +## Ingester Changes + +### Bug Fixes + +* Fixed a bug in the syslog ingester preprocessor that would crash given certain malformed input. diff --git a/changelog/list.md b/changelog/list.md index 2f3512ed..7bf573c9 100644 --- a/changelog/list.md +++ b/changelog/list.md @@ -7,7 +7,7 @@ maxdepth: 1 caption: Current Release --- -5.5.7 <5.5.7> +5.6.0 <5.6.0> ``` ## Previous Versions @@ -18,6 +18,7 @@ maxdepth: 1 caption: Previous Releases --- +5.5.7 <5.5.7> 5.5.6 <5.5.6> 5.5.5 <5.5.5> 5.5.4 <5.5.4> diff --git a/conf.py b/conf.py index 9d14d162..8b92a887 100644 --- a/conf.py +++ b/conf.py @@ -21,7 +21,7 @@ project = "Gravwell" copyright = f"Gravwell, Inc. {date.today().year}" author = "Gravwell, Inc." -release = "v5.5.7" +release = "v5.6.0" # Default to localhost:8000, so the version switcher looks OK on livehtml version_list_url = os.environ.get( @@ -56,6 +56,7 @@ ".DS_Store", "env", "_vendor", + "_tools", ] language = "en" diff --git a/configuration/configuration.md b/configuration/configuration.md index d24708fb..5622d007 100644 --- a/configuration/configuration.md +++ b/configuration/configuration.md @@ -69,7 +69,7 @@ By default, Gravwell does not generate TLS certificates. For instructions on set (configuration_indexer)= ## Indexer Configuration -Indexers are the storage centers of Gravwell and are responsible for storing, retrieving, and processing data. Indexers perform the first heavy lifting when executing a query, first finding the data then pushing it into the search pipeline. The search pipeline will perform as much work as possible in parallel on the indexers for efficiency. Indexers benefit from high-speed low-latency storage and as much RAM as possible. 
Gravwell can take advantage of file system caches, which means that as you are running multiple queries over the same data it won’t even have to go to the disks. We have seen Gravwell operate at over 5GB/s per node on well-cached data. The more memory, the more data can be cached. When searching over large pools that exceed the memory capacity of even the largest machines, high speed RAID arrays can help increase throughput. +Indexers are the storage centers of Gravwell and are responsible for storing, retrieving, and processing data. Indexers perform the first heavy lifting when executing a query, first finding the data then pushing it into the search pipeline. The search pipeline will perform as much work as possible in parallel on the indexers for efficiency. Indexers benefit from high-speed low-latency storage and as much RAM as possible. Gravwell can take advantage of filesystem caches, which means that as you are running multiple queries over the same data it won’t even have to go to the disks. We have seen Gravwell operate at over 5GB/s per node on well-cached data. The more memory, the more data can be cached. When searching over large pools that exceed the memory capacity of even the largest machines, high speed RAID arrays can help increase throughput. We recommend indexers have at least 32GB of memory with 8 CPU cores. If possible, Gravwell also recommends a very high speed NVME solid state disk that can act as a hot well, holding just a few days of the most recent data and aging out to the slower spinning disk pools. The hot well enables very fast access to the most recent data, while enabling Gravwell to organize and consolidate older data so that it can be searched as efficiently as possible. @@ -103,6 +103,14 @@ Tag-to-well mappings are defined in the `/opt/gravwell/etc/gravwell.conf` config The well named "raw" is thus used to store data tagged "pcap" and "video", which we could reasonably assume will consume a significant amount of storage.
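As a sketch, a mapping like the one described above is expressed in `gravwell.conf` as a storage-well stanza; the well name, storage path, and tags below are illustrative:

```
[Storage-Well "raw"]
	Location=/opt/gravwell/storage/raw
	Tags=pcap
	Tags=video
```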
+ +(well_storage)= +#### Well Storage + +Gravwell Indexers require seekable POSIX compliant filesystems for hot and cold storage volumes. Picking the right filesystem for your well storage can open up opportunities for optimizations and fault tolerance above and beyond what Gravwell offers in the default configuration. + +See the section on [Filesystems](/configuration/filesystems) for more details on supported filesystems and filesystem options. + #### Tag Restrictions and Gotchas Tag names can only contain alphanumeric values; dashes, underscores, special characters, etc. are not allowed in tag names. Tags should be simple names like "syslog" or "apache" that are easy to type and reflect the type of data in use. diff --git a/configuration/filesystems.md b/configuration/filesystems.md new file mode 100644 index 00000000..35dd614b --- /dev/null +++ b/configuration/filesystems.md @@ -0,0 +1,55 @@ +# Gravwell Indexer Supported Filesystems + +Gravwell Indexers require robust, seekable, and POSIX compliant filesystems in order to function properly. The Gravwell system makes extensive use of memory mapping, madvise calls, and filesystem specific optimizations to maximize compression ratios and query throughput. Picking a good filesystem for your deployment is critical to ensuring a manageable and fast Gravwell system. + +## Supported Filesystems + +Gravwell officially supports the following Linux filesystems. + +| Filesystem | Minimum Kernel Version | Supports Transparent Compression | |:-----------|:-----------------------|:--------------------------------:| | EXT4 | 3.2 | | | XFS | 3.2 | | | BTRFS | 5.0 | ✅ | | ZFS | N/A | ✅ | | NFSv4 | N/A | | + + + +### Ext4 + +The Ext4 filesystem is well supported and an excellent default choice as a backing filesystem. Ext4 supports volume sizes up to 1EiB and up to 4 billion files; Gravwell tests extensively on Ext4. + +### XFS + +The XFS filesystem is extremely fast, well tested, and praised by kernel developers.
XFS supports a wide array of configuration options to optimize the filesystem for a specific storage device topology. + +### BTRFS + +The BTRFS filesystem has been a core part of the Linux kernel for over a decade, but due to some rocky starts and conservative warnings about stability early on in its life cycle it gets a bad rap. Gravwell extensively tests the BTRFS filesystem in a transparent compression topology and has found it to be exceedingly fast, memory efficient, and well supported. While BTRFS is supported all the way back to Linux Kernel 3.2, 5.X and newer kernels contain a highly optimized and stable code base. Gravwell recommends BTRFS with ZSTD compression for a hot store when transparent compression is enabled and users want the best performance. + +### ZFS + +The ZFS filesystem has long been praised as **THE** next generation filesystem. It has a stable, well-maintained code base with robust documentation. However, ZFS is in a bit of a strange situation in the Linux kernel in that many distributions do not natively support it and the kernel maintainers believe it has an incompatible license. ZFS also employs its own caching strategy that is not well blended with the Linux page cache; this means you need to manually tune the ZFS ARC cache and be aware that the ARC cache competes for memory with the Gravwell processes. When memory gets tight, ZFS will not free memory in the same way that BTRFS may. That being said, the additional configuration options available in ZFS make it a good choice for cold storage when compression ratios are of the utmost importance. + +Gravwell recommends ZFS when transparent compression is desired for a cold storage tier and compression efficiency is more important than raw speed. Setting the block size to 1MB and the compression system to zstd-10 can yield impressive compression ratios that still perform well. ZFS however is significantly slower than BTRFS when using transparent compression and a fast storage device. 
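The cold-store settings recommended above can be applied with standard OpenZFS property commands; the dataset name `pool/gravwell-cold` is illustrative, and `zstd` compression requires OpenZFS 2.0 or newer:

```
zfs set recordsize=1M pool/gravwell-cold
zfs set compression=zstd-10 pool/gravwell-cold
zfs get recordsize,compression pool/gravwell-cold
```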
ZFS also does not support the ability to disable copy-on-write and compression on a per file basis, so ZFS will attempt to compress and fragment highly orthogonal data structures like well indexes. + +### NFSv4 + +Some customers desire storage arrays to be fully remote with dedicated storage appliances doing the dirty work of data management. Gravwell tentatively supports NFSv4 with a few caveats. The filesystem must be configured with all supporting daemons and mount options such that file permissions can be properly mapped to the NFS volume. While it is possible to disable user/group management on NFS entirely, this is not recommended. + +Gravwell Indexers also maintain long-lived file handles with very high I/O requirements. NFS, being a network filesystem, suffers from network interruptions, which can cause process hangs, unexpected performance drops, and increased complexity of management. Gravwell only tests on NFSv4 and generally does not recommend it. + + +## Unsupported Filesystems + +Gravwell requires full, robust POSIX compatibility. The following filesystems are not supported at all. Gravwell may still function, but we make no guarantees about performance, reliability, or correctness. + +* FAT32 +* VFAT +* NTFS +* SMB/CIFS +* FUSE mounts + +Other POSIX compliant filesystems like EXT2, EXT3, and ReiserFS are not tested. Cluster filesystems such as GlusterFS, Lustre, and CephFS are fully POSIX compliant and customers have reported good results; however, Gravwell has not done extensive testing and does not officially support them. diff --git a/configuration/parameters.md b/configuration/parameters.md index f6883c17..eb79e24b 100644 --- a/configuration/parameters.md +++ b/configuration/parameters.md @@ -312,6 +312,7 @@ Default Value: Example: `Web-Listen-Address=10.0.0.1` Description: The Web-Listen-Address parameter specifies the address the webserver should bind to and serve from.
By default the parameter is empty, meaning the webserver binds to all interfaces and addresses. +(remote-indexers-conig)= ### **Remote-Indexers** Applies to: Webserver Default Value: `net:10.0.0.1:9404` diff --git a/configuration/replication.md b/configuration/replication.md index f74ebd38..b9dc62d1 100644 --- a/configuration/replication.md +++ b/configuration/replication.md @@ -1,3 +1,4 @@ +(data-replication)= # Data Replication Replication is included with all Gravwell Cluster Edition licenses, allowing for fault-tolerant high availability deployments. The Gravwell replication engine transparently manages data replication across distributed indexers with automatic failover, load balanced data distribution, and compression. Gravwell also provides fine tuned control over exactly which wells are included in replication and how the data is distributed across peers. Customers can rapidly deploy a Gravwell cluster with uniform data distribution, or design a replication scheme that can tolerate entire data center failures using region-aware peer selection. The online failover system also allows continued access to data even when some indexers are offline. @@ -113,7 +114,7 @@ Replication is controlled by the "Replication" configuration group in the gravwe | Connect-Wait-Timeout | Connect-Wait-Timeout=30 | Specifies the number of seconds an Indexer should wait when attempting to connect to replication peers during startup. | | Disable-Server | Disable-Server=true | Disable the indexer replication server, it will only act as a client. This is important when using offline replication. | | Disable-Compression | Disable-Compression=true | Disable compression on the storage for the replicated data. | -| Enable-Transparent-Compression | Enable-Transparent-Compression=true | Enable transparent compression on using the host file system for replicated data. 
| +| Enable-Transparent-Compression | Enable-Transparent-Compression=true | Enable transparent compression using the host filesystem for replicated data. | | Enable-Transport-Compression | Enable-Transport-Compression=true | Enable transport compression when transmitting data to a replication peer. Defaults to `true`. | ## Disabling Replication Per Well diff --git a/distributed/overwatch.md b/distributed/overwatch.md index d1458ff6..2ddb622f 100644 --- a/distributed/overwatch.md +++ b/distributed/overwatch.md @@ -1,3 +1,4 @@ +(gravwell-overwatch)= # Gravwell Overwatch For any number of reasons it may be advantageous to run multiple separate instances of Gravwell. A Managed Security Service Provider (MSSP) might set up a Gravwell indexer+webserver instance for each of their customers for easy separation of data and simpler user management. However, while you don't want Customer A to access Customer B's data, it would be useful for the MSSP to be able to query across Customer A and Customer B simultaneously. diff --git a/flows/flows.md b/flows/flows.md index 206b71de..ca08251b 100644 --- a/flows/flows.md +++ b/flows/flows.md @@ -63,6 +63,7 @@ maxdepth: 1 hidden: true --- +Attach Background/Save query Email Flow Storage Read @@ -98,6 +99,7 @@ Throttle Update Resources ``` +* [Attach](nodes/attach): Attach to an existing Gravwell query. * [Background/Save Search](nodes/bgsave): Save or background a Gravwell query. * [Email](nodes/email): send email. * [Flow Storage Read](nodes/storageread): read items from persistent storage.
diff --git a/flows/nodes/attach-alert.png b/flows/nodes/attach-alert.png new file mode 100644 index 00000000..f5f8fad6 Binary files /dev/null and b/flows/nodes/attach-alert.png differ diff --git a/flows/nodes/attach-flow1.png b/flows/nodes/attach-flow1.png new file mode 100644 index 00000000..67a5fd9f Binary files /dev/null and b/flows/nodes/attach-flow1.png differ diff --git a/flows/nodes/attach-flow2.png b/flows/nodes/attach-flow2.png new file mode 100644 index 00000000..641d7069 Binary files /dev/null and b/flows/nodes/attach-flow2.png differ diff --git a/flows/nodes/attach-mattermost.png b/flows/nodes/attach-mattermost.png new file mode 100644 index 00000000..cdaa7642 Binary files /dev/null and b/flows/nodes/attach-mattermost.png differ diff --git a/flows/nodes/attach.md b/flows/nodes/attach.md new file mode 100644 index 00000000..24a9fb93 --- /dev/null +++ b/flows/nodes/attach.md @@ -0,0 +1,45 @@ +# Attach + +The Attach node connects to an existing Gravwell query, specified by a search ID. Like the [Run Query](runquery) node, it outputs a structure (named `search` by default) containing information about the search into the payload and allows other nodes to access the results. + +The most common use of the Attach node is to connect to the scheduled search which triggered the [Alert](/alerts/alerts) for which the current flow is a consumer. See the example below for more information on this use case. + +## Configuration + +* `Search ID`, required: the ID of an existing Gravwell query. +* `Output Variable Name`: the name to use for results in the payload, default "search". + +## Output + +The node inserts an object (named `search` by default) containing information about the search into the payload. The component fields are documented in the [Run Query node's documentation](runquery). 
+ +## Example + +In this example, we will build a flow which, when triggered from an [alert](/alerts/alerts), attaches to the search which triggered the alert and sends the search results to a Mattermost channel. + +First, we build a scheduled search; for testing, we're running the following query: + +``` +tag=gravwell limit 3 | syslog Appname Hostname | table Appname Hostname +``` + +Next, we create an Alert and set that scheduled search as a dispatcher: + +![](attach-alert.png) + +```{note} +Observe that the alert has "Max Events" set to 1. That's because our flow will be connecting to the dispatching search and retrieving *all* results from the search, so we only want the alert to run the flow at most once per execution of the search. Setting Max Events to 1 accomplishes this. +``` + +Finally, we create a flow using the Attach node and the Mattermost node and associate it with the alert: + +![](attach-flow1.png) + +![](attach-flow2.png) + +Note how the Mattermost node is able to use `search.Results` as the message text, exactly as if we had used the [Run Query](runquery) node instead of Attach. + +The output looks like this: + +![](attach-mattermost.png) + diff --git a/license/free-tier-error.png b/license/free-tier-error.png new file mode 100644 index 00000000..da88c024 Binary files /dev/null and b/license/free-tier-error.png differ diff --git a/license/license.md b/license/license.md index 6f37b64d..baebd4e9 100644 --- a/license/license.md +++ b/license/license.md @@ -26,7 +26,7 @@ Updating a Gravwell license can be performed using the CLI or GUI without restar ## License Expiration -All Gravwell licenses have an expiration date and once a license has expired Gravwell will not start. A license expires in four discrete steps: +All Gravwell licenses except the Free edition have an expiration date and once a license has expired Gravwell will not start. A license expires in four discrete steps: 1. Warning about upcoming expiration 2. 
Expiration grace period @@ -39,7 +39,6 @@ Gravwell will never delete data due to license expiration, all stored data, reso Here is a handy table that explains the events leading up to and after license expiration. - | Event | Description | Time to License Expiration | |-------|-------------|:--------------------------:| | Warning 1 | A notification indicating that the license will expire in less than 30 days | T - 30 days | @@ -47,3 +46,39 @@ Here is a handy table that explains the events leading up to and after license e | Expiration | A notification indicating that the license is expired, 14 day grace period begins | T - 0 | | Ingest Disabled | Ingest is disabled and a notification indicating that the license is expired | T + 15 days | | Query Disabled | Searching is disabled and a notification indicating that the license is expired | T + 30 days | + +## Gravwell License Types + +| Type | Identifier | Basic Features | Unlimited Ingest | Cluster | Replication | CBAC | HA Webservers | SSO | Notes | +|---------------------|-------------|:--------------:|:----------------:|:-------:|:-----------:|:----:|:-------------:|:---:|:----------------------------------------------------| +| Free | free | ✅ | | | | | | | Free with 2GB/day ingest, no sign-up required, non-commercial use only, never expires. | +| Community Edition | community | ✅ | | | | | | | Free signup with 13.9 GB/day ingest, authorized for commercial use, [free licenses with instant delivery](https://www.gravwell.io/community-edition). | +| CE Advanced | community | ✅ | | | | | | | Free signup with 50 GB/day ingest, authorized for commercial use, [free license](https://www.gravwell.io/community-edition-advanced) after validation. Business email required. | +| Pro | single | ✅ | ✅ | | | | | | Single indexer, unlimited ingest, limited features. | +| Enterprise | single | ✅ | ✅ | | ✅ | ✅ | | ✅ | Single indexer, full feature set, offline replication supported. 
| +| Cluster | cluster | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Cluster deployment with online replication, distributed webservers, and full feature set. | +| Unlimited | unlimited | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Cluster deployment with no limit on indexer count; the *go nuts* license tier. | +| Cloud | cloud | ✅ | | gravwell managed | gravwell managed | ✅ | gravwell managed | ✅ | Gravwell managed cloud deployment, opaque infrastructure with contract defined ingest. | + + +🗸 - rate limited + +✅ - full support + +gravwell managed - Gravwell managed + +## Free Edition Feature Availability Warnings + +As illustrated above, not all features are available when using the Free edition. Gravwell will display warning messages if it detects state that is incompatible with the Free edition. + +![CBAC is not compatible with Free Tier](free-tier-error.png) + +Please consult the following table for advice on resolving Free edition availability warnings, should you encounter them. + +| Message | Fix | | -------------------------------- | --------------------------------------------------------------------------------------------------- | | `Overwatch enabled` | Ensure you're not running an [Overwatch webserver](#gravwell-overwatch) | | `Remote-Indexers configured` | Ensure your `gravwell.conf` has no more than one [remote indexer configured](remote-indexers-conig) | | `CBAC enabled` | Ensure `Enable-CBAC` is not set in your `gravwell.conf` ([CBAC docs](#enabling-cbac)) | | `Remote Datastore enabled` | Ensure `Datastore` is not set in your `gravwell.conf` ([Datastore Server docs](#datastore_server)) | | `Replication configured` | Ensure your `gravwell.conf` does not have a [replication section](#data-replication) | diff --git a/quickstart/quickstart.md b/quickstart/quickstart.md index a2766edd..f737d85a 100644 --- a/quickstart/quickstart.md +++ b/quickstart/quickstart.md @@ -19,7 +19,7 @@ This guide is suitable for Community Edition users as well as users with a paid You may find the
[installation checklist](checklist) and the [glossary](/glossary/glossary) useful companions to this document. -If you are interested in a complete training package, please see the [complete training PDF](https://github.com/gravwell/training/releases/download/v5.5.7/gravwell_training_v5.5.7.pdf). The Gravwell training PDF is the complete training manual which is paired with labs and exercises. The exercises are built from the open source [Gravwell Training](https://github.com/gravwell/training) repository. +If you are interested in a complete training package, please see the [complete training PDF](https://github.com/gravwell/training/releases/download/v5.6.0/gravwell_training_v5.6.0.pdf). The Gravwell training PDF is the complete training manual which is paired with labs and exercises. The exercises are built from the open source [Gravwell Training](https://github.com/gravwell/training) repository. ```{note} Community Edition users will need to obtain their own license from [https://www.gravwell.io/download](https://www.gravwell.io/download) before beginning installation. Paid users should already have received a license file via email. ``` diff --git a/search/eval/eval.md b/search/eval/eval.md index a51ef868..8c1f39da 100644 --- a/search/eval/eval.md +++ b/search/eval/eval.md @@ -86,6 +86,29 @@ This program will initialize a variable "count" to 0, and the value will persist To use a persistent variable, it must be declared with `var <name>;`. Optionally, you can initialize the variable to a value with the syntax `var <name> = <value>;` +Persistent variables are not attached to entries like other variables. In order to use a persistent variable's value outside of eval, it must be assigned to a regular variable. + +### Persistent Maps + +Like persistent variables, eval can also create persistent maps, which behave like key/value objects. A map uses strings as keys, and can store any eval variable type except maps.
Maps are declared with the `map` keyword, and accessed like other variables. To access a specific key in a map, the notation `map[key]` is used. + +For example, to count each unique Appname in a list of syslog entries, a map can be used with the syslog Appname as the key: + +``` +tag=gravwell + syslog Appname +| eval + map appnames; + if (appnames[Appname] == 0) + appnames[Appname] = 1; // The key doesn't exist. Create it with a count of one. + else + appnames[Appname] = appnames[Appname] + 1; // The key does exist. Increment. +``` + +Like persistent variables, maps are not attached to entries. Values must be assigned to regular variables in order to use them outside of eval. + +Maps have a limit of 1000000 keys. Any new key assigned to a map after this limit is reached will be discarded. + ### Keywords The following keywords are reserved and may not be used as identifiers. @@ -237,6 +260,18 @@ if ( foo == "bar" ) { } ``` +Eval also supports `else if` statements of arbitrary length. For example: + +``` +if ( foo == "bar" ) { + output = "foo is bar!"; +} else if ( foo == "baz" ) { + output = "foo is baz!"; +} else { + output = "foo is neither bar nor baz!"; +} +``` + ### for statements "for" statements specify the repeated execution of a block. "for" statements use the C-style syntax of an initializer, condition, and post statement, and are contained in parentheses `( )` and separated by semicolons `;`. Code blocks are contained in braces `{ }`. 
For example, to iterate 10 times over a code block: @@ -789,11 +824,11 @@ The eval syntax is expressed using a [variant](https://github.com/gravwell/pbpg) ``` Program = ( "(" Expression ")" EOF ) | ( "(" Vars StatementList ")" EOF ) | ( "(" StatementList ")" EOF ) | ( "(" Assignment ")" EOF ) | ( Expression EOF ) | ( Vars StatementList EOF ) | ( StatementList EOF ) | ( Assignment EOF ) Vars = VarSpec { VarSpec } -VarSpec = "var" VarSpecAssignment { "," VarSpecAssignment } ";" +VarSpec = ( "var" VarSpecAssignment { "," VarSpecAssignment } ";" ) | ( "map" AssignmentIdentifier ";" ) VarSpecAssignment = AssignmentIdentifier [ "=" Expression ] StatementList = Statement { Statement } Statement = ( "if" "(" Expression ")" Statement "else" Statement ) | ( "if" "(" Expression ")" Statement ) | ( "for" "(" Assignment ";" Expression ";" Assignment ")" "{" StatementList "}" ) | "{" StatementList "}" | Function ";" | Assignment ";" | "return" Expression ";" | "break" ";" | "continue" ";" | ";" -Assignment = ( AssignmentIdentifier "=" Expression ) | Expression +Assignment = ( AssignmentIdentifier "[" Expression "]" "=" Expression ) | ( AssignmentIdentifier "=" Expression ) | Expression Expression = ( LogicalOrExpression "?" 
Expression ":" LogicalOrExpression ) | LogicalOrExpression LogicalOrExpression = LogicalAndExpression { LogicalOrOp LogicalAndExpression } LogicalAndExpression = InclusiveOrExpression { LogicalAndOp InclusiveOrExpression } @@ -807,7 +842,7 @@ AdditiveExpression = MultiplicativeExpression { AdditiveOp MultiplicativeE MultiplicativeExpression = UnaryExpression { MultiplicativeOp UnaryExpression } UnaryExpression = UnaryOp PostfixExpression | PostfixExpression PostfixExpression = PrimaryExpression [ PostfixOp ] -PrimaryExpression = NestedExpression | Identifier | Literal +PrimaryExpression = NestedExpression | ( Identifier "[" Expression "]" ) | Identifier | Literal NestedExpression = ( Function ) | ( Cast "(" Expression ")" ) | ( "(" Expression ")" ) Literal = DecimalLiteral | FloatLiteral | StringLiteral | "true" | "false" Function = FunctionName "(" [ Expression { "," Expression } ] ")" diff --git a/search/fuse/fuse.md b/search/fuse/fuse.md index 06e1b34c..2b9ee5bc 100644 --- a/search/fuse/fuse.md +++ b/search/fuse/fuse.md @@ -13,6 +13,8 @@ You can think of fuse like a simpler version of [lookup](/search/lookup/lookup), ## Supported Options +* `-maxtracked `: sets the maximum number of unique keys to track per operation. The query will abort if this value is exceeded. Defaults to 1000000. +* `-maxsize `: sets the maximum amount of memory in megabytes to hold when tracking keys. Defaults to 1024MB. * `-pushpop`: This flag enables "push-pop" mode. In this mode, fuse *always* stores the current value of the data enumerated values and *always* loads the previous values. See below for a detailed example. ## Syntax @@ -130,4 +132,4 @@ Viewing the results in a table shows that user `jfloren` logged on from two diff ![](fuse-velocity.png) -Note how on the second entry, the `previousLocation` value matches the `Location` of the *first* entry; likewise previousLocation on the third entry matches that of the second entry. This is the basic function of the push-pop mode. 
\ No newline at end of file +Note how on the second entry, the `previousLocation` value matches the `Location` of the *first* entry; likewise, `previousLocation` on the third entry matches that of the second entry. This is the basic function of the push-pop mode. diff --git a/search/search.md b/search/search.md index f964a535..c24b7a2e 100644 --- a/search/search.md +++ b/search/search.md @@ -285,6 +285,10 @@ start="2006-01-02T15:04:05Z07:00" end=-1h tag=default json foo table If start/end time constraints are provided, the GUI time picker timeframe will be ignored. ``` +```{note} +The start/end constraints cannot be used in the inner query portion of compound queries. Only the main query can use these constraints. +``` + ## Comments Gravwell supports two types of comments. diff --git a/tuning/tuning.md b/tuning/tuning.md index eff57400..005010a3 100644 --- a/tuning/tuning.md +++ b/tuning/tuning.md @@ -232,7 +232,7 @@ Example: `Enable-Transparent-Compression=true` These parameters control kernel-level, transparent compression of data in the wells. If enabled, Gravwell can instruct the `btrfs` filesystem to transparently compress data. This is more efficient than user-mode compression. Setting `Enable-Transparent-Compression` true automatically turns off user-mode compression. Note that setting `Disable-Compression=true` will disable transparent compression. -Additionally, transparent compression also has performance benefits by taking advantage of memory de-duplication, if you need the best possible performance from Gravwell, combining transparent compression with a well tuned BTFS file system is the best way to achieve it. +Additionally, transparent compression has performance benefits by taking advantage of memory de-duplication. If you need the best possible performance from Gravwell, combining transparent compression with a well-tuned btrfs filesystem is the best way to achieve it. ##### Acceleration