
chore: Merge master to dev #1050

Merged
merged 22 commits into dev from master
Jul 12, 2024
1139752
Document how to configure Admin-Attribute with Keycloak
floren Jul 10, 2024
b026172
Apply 5.4.10-3 to master
david-fritz-gravwell Jul 11, 2024
032ef0b
also update other winevent links
david-fritz-gravwell Jul 11, 2024
7a05bee
Update ingesters/winevent.md
david-fritz-gravwell Jul 11, 2024
2a17775
Merge pull request #1046 from david-fritz-gravwell/5.4.10
ashnwade Jul 11, 2024
d053df1
Merge pull request #1044 from john-floren-gravwell/sso-admin
michael-wisely-gravwell Jul 12, 2024
5e9441b
Fix up Kafka ingester docs
floren Jul 12, 2024
49a904d
fix myst
ashnwade Jul 12, 2024
3b267e4
Merge pull request #1051 from ashnwade/housekeeping
ashnwade Jul 12, 2024
c4c441d
Merge pull request #1052 from ashnwade/release/v5.4.4
ashnwade Jul 12, 2024
b83583d
Merge pull request #1053 from gravwell/release/v5.4.3
michael-wisely-gravwell Jul 12, 2024
9e8274d
fix myst
ashnwade Jul 12, 2024
099c2be
Merge pull request #1054 from gravwell/release/v5.4.4
michael-wisely-gravwell Jul 12, 2024
cd3d677
Merge pull request #1055 from gravwell/release/v5.4.5
ashnwade Jul 12, 2024
cb31381
Merge pull request #1057 from gravwell/release/v5.4.6
michael-wisely-gravwell Jul 12, 2024
33d9d6a
Merge pull request #1056 from ashnwade/release/v5.4.7
michael-wisely-gravwell Jul 12, 2024
7c7177b
Merge pull request #1059 from ashnwade/release/v5.4.7
michael-wisely-gravwell Jul 12, 2024
050ec9f
Merge pull request #1058 from gravwell/release/v5.4.7
michael-wisely-gravwell Jul 12, 2024
bf7a7bd
fix myst
ashnwade Jul 12, 2024
149c52b
Merge pull request #1060 from ashnwade/release/v5.4.8
michael-wisely-gravwell Jul 12, 2024
eb24ac9
Merge pull request #1061 from gravwell/release/v5.4.8
michael-wisely-gravwell Jul 12, 2024
cd1a175
Merge pull request #1062 from gravwell/release/v5.4.9
michael-wisely-gravwell Jul 12, 2024
Binary file added configuration/sso-keycloak/admin-attribute.png
Binary file added configuration/sso-keycloak/admin-sso.png
30 changes: 30 additions & 0 deletions configuration/sso-keycloak/keycloak.md
@@ -111,3 +111,33 @@ systemctl restart gravwell_webserver
```

Then go to your Gravwell login screen; you should see the "Login with SSO" button underneath the regular username & password fields. Clicking this button will take you to the Keycloak login page, where you can enter credentials for a valid Keycloak user. This should log you in to Gravwell.

## Setting the Admin flag on users

With the `Admin-Attribute` configuration option, we can use Keycloak to determine which users should be Gravwell admins. Assuming we've added the following to our `[SSO]` stanza in `gravwell.conf`:

```
Admin-Attribute=isAdmin
```

We will then configure Keycloak to send an appropriate attribute named "isAdmin" with each message. First, we'll go to the "Realm settings" page, find the "User profile" tab, and click "Create attribute". We'll then populate the form as below:

![](admin-attribute.png)

We can add a Validator to make sure it is always set to "true" or "false". Select "Add Validator", then choose the "options" type and enter two options:

![](admin-attribute-options.png)

Save the validator, then save the attribute.

Next, go to the "Client scopes" page and select the Gravwell scope we defined earlier. Open the Mappers tab, then click "Add mapper" and pick "By configuration". Select the "User Attribute" configuration and populate it as follows:

![](admin-attribute-mapper.png)

Finally, set the `isAdmin` attribute as desired on your users:

![](admin-attribute-user.png)

When this user logs in, they will be flagged as a Gravwell admin:

![](admin-sso.png)
16 changes: 0 additions & 16 deletions ingesters/http.md
@@ -70,22 +70,6 @@ Multiple "Listener" definitions can be defined allowing specific URLs to send en
TokenValue=Secret
```

## Installation

If you're using the Gravwell Debian repository, installation is just a single apt command:

```
apt-get install gravwell-http-ingester
```

Otherwise, download the installer from the [Downloads page](/quickstart/downloads). Using a terminal on the Gravwell server, issue the following command as a superuser (e.g. via the `sudo` command) to install the ingester:

```console
root@gravserver ~ # bash gravwell_http_ingester_installer_3.0.0.sh
```

If the Gravwell services are present on the same machine, the installation script will automatically extract and configure the `Ingest-Auth` parameter and set it appropriately. However, if your ingester is not resident on the same machine as a pre-existing Gravwell backend, the installer will prompt for the authentication token and the IP address of the Gravwell indexer. You can set these values during installation or leave them blank and modify the configuration file in `/opt/gravwell/etc/gravwell_http_ingester.conf` manually.

## Configuring HTTPS

By default the HTTP Ingester runs a cleartext HTTP server, but it can be configured to run an HTTPS server using x509 TLS certificates. To configure the HTTP Ingester as an HTTPS server provide a certificate and key PEM files in the Global configuration space using the `TLS-Certificate-File` and `TLS-Key-File` parameters.
104 changes: 56 additions & 48 deletions ingesters/kafka.md
@@ -7,24 +7,67 @@ myst:
---
# Kafka

The Kafka ingester designed to act as a consumer for [Apache Kafka](https://kafka.apache.org/) so that data Gravwell can attach to a Kafka cluster and consume data. Kafka can act as a high availability [data broker](https://kafka.apache.org/uses#uses_logs) to Gravwell. Kafka can take on some of the roles provided by the Gravwell Federator, or ease the burden of integrating Gravwell into an existing data flow. If your data is already flowing to Kafka, integrating Gravwell is just an `apt-get` away.
The Kafka ingester is designed to act as a consumer for [Apache Kafka](https://kafka.apache.org/) so that Gravwell can attach to a Kafka cluster and consume data. Kafka can act as a high-availability [data broker](https://kafka.apache.org/uses#uses_logs) to Gravwell. Kafka can take on some of the roles provided by the Gravwell Federator, or ease the burden of integrating Gravwell into an existing data flow. If your data is already flowing to Kafka, integrating Gravwell is just an `apt-get` away.

The Gravwell Kafka ingester is best suited as a co-located ingest point for a single indexer. If you are operating a Kafka cluster and a Gravwell cluster, it is best not to duplicate the load balancing characteristics of Kafka at the Gravwell ingest layer. Install the Kafka ingester on the same machine as the Gravwell indexer and use the Unix named pipe connection. Each indexer should be configured with its own Kafka ingester, this way the Kafka cluster can manage load balancing.
The Gravwell Kafka ingester is best suited as a co-located ingest point for a single indexer. If you are operating a Kafka cluster and a Gravwell cluster, it is best not to duplicate the load balancing characteristics of Kafka at the Gravwell ingest layer. Each indexer should be configured with its own Kafka ingester, allowing the Kafka cluster to manage load balancing; install the Kafka ingester on the same machine as the Gravwell indexer and use the Unix named pipe connection for communication with the indexer.

Most Kafka configurations enforce a data durability guarantee, which means data is stored in non-volatile storage when consumers are not available to consume it. As a result we do not recommend that the Gravwell ingest cache be enabled on Kafka ingester, instead let Kafka provide the data durability.
Most Kafka configurations enforce a data durability guarantee, which means data is stored in non-volatile storage when consumers are not available to consume it. As a result, we do not recommend enabling the Gravwell ingest cache on the Kafka ingester; instead, let Kafka provide the data durability.

## Installation

```{include} installation_instructions_template
```

## Basic Configuration
## Configuration

The Kafka ingester uses the unified global configuration block described in the [ingester section](ingesters_global_configuration_parameters). Like most other Gravwell ingesters, the Kafka Ingester supports multiple upstream indexers, TLS, cleartext, and named pipe connections, a local cache, and local logging.

The configuration file is at `/opt/gravwell/etc/kafka.conf`. The ingester will also read configuration snippets from its [configuration overlay directory](configuration_overlays) (`/opt/gravwell/etc/kafka.conf.d`).
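For example, a site-specific consumer can be added via a drop-in snippet rather than editing `kafka.conf` directly. This is an illustrative sketch; the file name and values are hypothetical, following the consumer syntax shown in the examples below:

```
# /opt/gravwell/etc/kafka.conf.d/local-consumer.conf (hypothetical file name)
[Consumer "local"]
Leader="127.0.0.1:9092"
Default-Tag=kafka
Topic=logs
Consumer-Group=gravwell
```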

## Consumer Examples
### Consumer Configurations

The Gravwell Kafka ingester can subscribe to multiple topics and even multiple Kafka clusters. Each consumer is defined in a `Consumer` block with a few key configuration values.

The following parameters configure the connection to the Kafka cluster:

| Parameter | Type | Description | Required |
|-----------|------|-------------|----------|
| Leader | host:port | The Kafka cluster leader/broker. This should be an IP address or hostname; if no port is specified, the default port of 9092 is appended. | YES |
| Topic | string | The Kafka topic this consumer will read from. | YES |
| Consumer-Group | string | The Kafka consumer group this ingester is a member of; the default is `gravwell`. | NO |
| Rebalance-Strategy | string | The rebalancing strategy to use when reading from Kafka. Options are `roundrobin` (default), `sticky`, and `range`. | NO |
| Auth-Type | string | Enable SASL authentication and specify the mechanism. | NO |
| Username | string | The username for SASL authentication. | NO |
| Password | string | The password for SASL authentication. | NO |
| Use-TLS | boolean | If set, the ingester will connect to the Kafka cluster using TLS. | NO |
| Insecure-Skip-TLS-Verify | boolean | If TLS is in use, setting this parameter makes the ingester ignore invalid TLS certificates. | NO |

These parameters configure how the ingester handles incoming data from Kafka:

| Parameter | Type | Description | Required |
|-----------|------|-------------|----------|
| Default-Tag | string | Entries which do not receive a tag from the `Tag-Header` will be assigned this default tag. | YES |
| Tag-Header | string | If set, the ingester will look at the specified header to determine the tag under which the entry should be ingested. If the header is not set on the message, the `Default-Tag` will be used. By default, `Tag-Header` is set to "TAG". | NO |
| Tags | string | Specifies a list of allowable tags (or wildcard patterns) for the `Tag-Header`, e.g. `Tags=gravwell,foo,b*r`. Any entry with a tag which does not match one of the patterns will instead be assigned the `Default-Tag`. | NO |
| Source-Header | string | Gravwell producers will often put the data source address in a message header; if set, the ingester will attempt to interpret the given header as a source address. If the header is not valid, the ingester will apply the source override (if set) or the default source. | NO |
| Source-As-Binary | boolean | If set, the ingester will assume that the contents of the `Source-Header` are in binary format rather than a string. | NO |
| Synchronous | boolean | If set, the ingester will perform a sync on the ingest connection every time a Kafka batch is written. | NO |
| Batch-Size | integer | The number of entries to read from Kafka before forcing a write to the ingest connection; the default is 512. | NO |
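On the producing side, these are ordinary Kafka message headers. As a hedged sketch (assuming the third-party `kafka-python` client, the default `TAG` header name, and a `Source-Header=SRC` configuration; `build_gravwell_headers` is a hypothetical helper, not part of any Gravwell library), a producer might attach them like this:

```python
# Sketch: attach Gravwell routing headers to Kafka messages.
# Assumes the default Tag-Header ("TAG") and Source-Header=SRC;
# build_gravwell_headers is a hypothetical helper, not a Gravwell API.

def build_gravwell_headers(tag: str, source: str) -> list[tuple[str, bytes]]:
    """Build Kafka message headers the Gravwell consumer can map to tag/source."""
    return [("TAG", tag.encode("utf-8")), ("SRC", source.encode("utf-8"))]

# Producing with kafka-python (uncomment with a reachable broker):
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="127.0.0.1:9092")
# producer.send("test", value=b"hello gravwell",
#               headers=build_gravwell_headers("syslog", "10.0.0.1"))
```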

These parameters provide standard Gravwell ingester configuration options related to timestamps, timezones, and the source field. See the [general ingester configuration page](/ingesters/ingesters) for more information about these parameters.

| Parameter | Type | Description | Required |
|-----------|------|-------------|----------|
| Source-Override | IPv4 or IPv6 | An IP address to use as the SRC for all entries. | NO |
| Ignore-Timestamps | boolean | If set, the ingester will apply the current timestamp to all received entries, ignoring Kafka timestamps. | NO |
| Extract-Timestamps | boolean | If set, the ingester will ignore the Kafka timestamps and attempt to extract a timestamp from the entry's contents. | NO |
| Assume-Local-Timezone | boolean | If set, timestamps extracted from entries will be assumed to be in the local timezone when no timezone is explicitly specified. | NO |
| Timezone-Override | string | If set, timestamps will be parsed in the given timezone, e.g. "America/New_York". | NO |
| Timestamp_Format_Override | string | Specifies a timestamp format, e.g. "RFC822", to use when parsing timestamps. | NO |
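To see why `Timezone-Override` matters, consider a timestamp that carries no zone information: the same string maps to different absolute times depending on the assumed zone. This standard-library sketch illustrates the effect conceptually; Gravwell's actual parsing logic is internal to the ingester:

```python
# Illustrates the effect of an assumed timezone on a naive timestamp.
# Conceptual only; this is not Gravwell's real timestamp parser.
from datetime import datetime
from zoneinfo import ZoneInfo

raw = "2024-07-12 09:30:00"  # no timezone in the entry itself
naive = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")

as_utc = naive.replace(tzinfo=ZoneInfo("UTC"))
as_ny = naive.replace(tzinfo=ZoneInfo("America/New_York"))

# The same wall-clock string is four hours apart in absolute time
# (New York is UTC-4 in July due to daylight saving time).
delta_hours = (as_ny.timestamp() - as_utc.timestamp()) / 3600
print(delta_hours)  # 4.0
```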

As with most ingesters, each consumer may also specify [preprocessors](/ingesters/preprocessors/preprocessors) if needed.

### Consumer Examples

```
[Consumer "default"]
@@ -35,55 +78,20 @@ The configuration file is at `/opt/gravwell/etc/kafka.conf`. The ingester will a
Tag-Header=TAG #look for the tag in the Kafka TAG header
Source-Header=SRC #look for the source in the Kafka SRC header

# This consumer does not specify a Tags parameter, so all entries will get the Default-Tag
[Consumer "test"]
Leader="127.0.0.1:9092"
Tag-Name=test
Default-Tag=test
Topic=test
Consumer-Group=mygroup
Synchronous=true
Key-As-Source=true #A custom feeder is putting its source IP in the message key value
Header-As-Source="TS" #look for a header key named TS and treat that as a source
Source-As-Text=true #the source value is going to come in as a text representation
Source-Header=SRC #A custom feeder is putting its source IP in the header named "SRC"
Batch-Size=256 #get up to 256 messages before consuming and pushing
Rebalance-Strategy=roundrobin
Rebalance-Strategy=sticky
```
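The `Tags` patterns documented above (e.g. `Tags=gravwell,foo,b*r`) are simple wildcard globs. Gravwell's actual matcher is internal to the ingester, but Python's `fnmatch` approximates the behavior for a quick sanity check:

```python
# Approximate the Tags wildcard matching with fnmatch.
# Illustration only; Gravwell's matcher is internal to the ingester.
from fnmatch import fnmatch

allowed = ["gravwell", "foo", "b*r"]

def tag_allowed(tag: str) -> bool:
    """Return True if the tag matches any allowed pattern."""
    return any(fnmatch(tag, pattern) for pattern in allowed)

print(tag_allowed("bar"))     # True: matches b*r
print(tag_allowed("beer"))    # True: matches b*r
print(tag_allowed("syslog"))  # False: would fall back to Default-Tag
```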

## Installation

The Kafka ingester is available in the Gravwell Debian repository as a Debian package as well as a shell installer on our [Downloads page](/quickstart/downloads). Installation via the repository is performed using `apt`:

```
apt-get install gravwell-kafka
```

The shell installer provides support for any non-Debian system that uses systemd, including Arch, Redhat, Gentoo, and Fedora.

```console
root@gravserver ~ # bash gravwell_kafka_installer.sh
```

## Configuration

The Gravwell Kafka ingester can subscribe to multiple topics and even multiple Kafka clusters. Each consumer defines a consumer block with a few key configuration values.


| Parameter | Type | Descriptions | Required |
|-----------|------|--------------| -------- |
| Tag-Name | string | The Gravwell tag that data should be sent to. | YES |
| Leader | host:port | The Kafka cluster leader/broker. This should be an IP or hostname, if no port is specified the default port of 9092 is appended | YES |
| Topic | string | The Kafka topic this consumer will read from | YES |
| Consumer-Group | string | The Kafka consumer group this ingester is a member of | NO - default is `gravwell` |
| Source-Override | IPv4 or IPv6 | An IP address to use as the SRC for all entries | NO |
| Rebalance-Strategy | string | The re-balancing strategy to use when reading from Kafka | NO - default is `roundrobin`. `sticky`, and `range` are also options |
| Key-As-Source | boolean | Gravwell producers will often put the data source address in a message key, if set the ingester will attempt to interpret the message key as a Source address. If the key structure is not correct the ingester will apply the override (if set) or the default source. | NO - default is false |
| Synchronous | boolean | The ingester will perform a sync on the ingest connection every time a Kafka batch is written. | NO - default is false |
| Batch-Size | integer | The number of entries to read from Kafka before forcing a write to the ingest connection | NO - default is 512 |
| Auth-Type | string | Enable SASL authentication and specify mechanism |
| Username | string | Specify username for SASL authentication |
| Password | string | Specify password for SASL authentication |

```{warning}
Setting any consumer as synchronous causes that consumer to continually Sync the ingest pipeline. It will have significant performance implications for ALL consumers.
Setting any consumer as synchronous causes that consumer to continually sync the ingest pipeline. It will have significant performance implications for ALL consumers.
```

```{note}
@@ -125,16 +133,16 @@ Log-File=/opt/gravwell/log/kafka.log

[Consumer "default"]
Leader="tasks.kafka.internal"
Tag-Name=default
Default-Tag=default
Tags=*
Topic=default
Consumer-Group=gravwell1
Key-As-Source=true
Batch-Size=256


[Consumer "test"]
Leader="tasks.testcluster.internal:9092"
Tag-Name=test
Default-Tag=test
Topic=test
Consumer-Group=testgroup
Source-Override="192.168.1.1"
2 changes: 1 addition & 1 deletion ingesters/win_file_follow.md
@@ -14,7 +14,7 @@ Download the Gravwell Windows File Follower installer:

| Ingester Name | Installer | More Info |
| :------------ | :----------- | :-------- |
| Windows File Follower | <a data-bs-custom-class="hash-popover" href="https://update.gravwell.io/archive/5.4.10/installers/gravwell_file_follow_5.4.10.1.msi">Download <i class="fa-solid fa-download"></i></a>&nbsp;&nbsp;&nbsp;<a data-bs-custom-class="hash-popover" href="javascript:void(0);" data-bs-toggle="popover" data-bs-placement="bottom" data-bs-html="true" data-bs-content='<code class="docutils literal notranslate"><span class="pre">105ed773fd5df5a33afcdeebe8c43de6a607dcbe659b79b05aaabb5515785830</span></code>'>(SHA256)</a> | [Documentation](/ingesters/win_file_follow) |
| Windows File Follower | <a data-bs-custom-class="hash-popover" href="https://update.gravwell.io/archive/5.4.10/installers/gravwell_file_follow_5.4.10.3.msi">Download <i class="fa-solid fa-download"></i></a>&nbsp;&nbsp;&nbsp;<a data-bs-custom-class="hash-popover" href="javascript:void(0);" data-bs-toggle="popover" data-bs-placement="bottom" data-bs-html="true" data-bs-content='<code class="docutils literal notranslate"><span class="pre">864afccaa583ebefb7c9cbc0310159e83f4c305255c54f395f14f67c85145c7f</span></code>'>(SHA256)</a> | [Documentation](/ingesters/win_file_follow) |

The Gravwell Windows file follower is installed using a signed MSI package. Gravwell signs both the Windows executable and MSI installer with our private key pairs, but depending on download volumes, you may see a warning about the MSI being untrusted. This is due to the way Microsoft "weighs" files. Basically, as they see more people download and install a given package, it becomes more trustworthy. Don't worry though, we have a well audited build pipeline and we sign every package.

2 changes: 1 addition & 1 deletion ingesters/winevent.md
@@ -49,7 +49,7 @@ Download the Gravwell Windows Events installer:

| Ingester Name | Installer | More Info |
| :------------ | :----------- | :-------- |
| Windows Events | <a data-bs-custom-class="hash-popover" href="https://update.gravwell.io/archive/5.4.10/installers/gravwell_win_events_5.4.10.1.msi">Download <i class="fa-solid fa-download"></i></a>&nbsp;&nbsp;&nbsp;<a data-bs-custom-class="hash-popover" href="javascript:void(0);" data-bs-toggle="popover" data-bs-placement="bottom" data-bs-html="true" data-bs-content='<code class="docutils literal notranslate"><span class="pre">67f2c5be57257125535528088a1fd5836789eaac11291b533c28110d56a820a6</span></code>'>(SHA256)</a> | [Documentation](/ingesters/winevent) |
| Windows Events | <a data-bs-custom-class="hash-popover" href="https://update.gravwell.io/archive/5.4.10/installers/gravwell_win_events_5.4.10.3.msi">Download <i class="fa-solid fa-download"></i></a>&nbsp;&nbsp;&nbsp;<a data-bs-custom-class="hash-popover" href="javascript:void(0);" data-bs-toggle="popover" data-bs-placement="bottom" data-bs-html="true" data-bs-content='<code class="docutils literal notranslate"><span class="pre">dc875345b58957d630933e3009a0fd13873764f9e6ecb353381ce6cf835be89d</span></code>'>(SHA256)</a> | [Documentation](/ingesters/winevent) |

Run the .msi installation wizard to install the Gravwell events service. On first installation the installation wizard will prompt to configure the indexer endpoint and ingest secret. Subsequent installations and/or upgrades will identify a resident configuration file and will not prompt.
